Nvidia · Fortune Technology
Even Nvidia’s own research teams can't get enough GPUs
This week, at the HumanX conference in San Francisco, Fortune's reporter found that GPUs are scarce even inside Nvidia itself.
Key facts
- GeekWire reported on Amazon CEO Andy Jassy's latest shareholder letter, which revealed that AWS's AI business has already reached a $15 billion annual revenue run rate.
- July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea
- Many executives say their AI strategy is more about optics than any actual internal guidance, according to Writer's new 2026 Enterprise AI Adoption Report, which surveyed 2,400 knowledge workers.
- A Meta employee created a dashboard so coworkers can compete to be the company's No. 1 AI token user, and Zuckerberg doesn't even rank in the top 250 (by Jacqueline Munis).
Summary
Welcome to Eye on AI, with AI reporter Sharon Goldman. It's been another one of those wild weeks in AI: Anthropic elected not to release its new Claude Mythos model because of concerns about the cybersecurity risks it poses (while forming a coalition to use a preview version of the model to bolster cybersecurity defenses); Meta released its first AI model since hiring Alexandr Wang; and expectations are mounting for OpenAI's upcoming "Spud" model. Most of these AI models run on Nvidia GPUs, the sophisticated and expensive AI chips (at over $30,000 apiece) that power their training and inference. The reporter sat down with Bryan Catanzaro, who leads applied deep learning research at Nvidia, overseeing teams working on AI-driven graphics, speech recognition, and simulation.