Nvidia

Even Nvidia’s own research teams can't get enough GPUs

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.

By Sharon Goldman

This week, at the HumanX conference in San Francisco, the reporter discovered that GPUs are scarce even inside Nvidia.

Summary

Welcome to Eye on AI, with AI reporter Sharon Goldman. It’s been another wild week in AI: Anthropic elected not to release its new Claude Mythos model over concerns about the cybersecurity risks it poses (while forming a coalition to use a preview version of the model to bolster cybersecurity defenses); Meta released its first AI model since hiring Alexandr Wang; and expectations are mounting for OpenAI’s upcoming “Spud” model. Most of these models run on Nvidia GPUs, the sophisticated and expensive AI chips (at over $30,000 apiece) that power their training and inference. The reporter sat down with Bryan Catanzaro, who leads applied deep learning research at Nvidia and oversees teams working on AI-driven graphics, speech recognition, and simulation.

Read full article at Fortune Technology →

#nvidia