Inference · The Information
Google in Talks With Marvell to Build New AI Chips for Inference
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, according to two people with direct knowledge of the discussions.
Key facts
- Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, according to two people with direct knowledge of the discussions
- One chip, called a language processing unit, is built on technology Nvidia licensed from startup Groq for $20 billion
- The other is a new TPU built specifically for running AI models
- At its GTC conference in March, Nvidia released a chip designed to improve the efficiency of inference workloads
Summary
One of the chips, called a language processing unit, is built on technology Nvidia licensed from startup Groq for $20 billion. The other is a new TPU built specifically for running AI models.