Google · The Register
Google taps Intel for another round of custom network chips
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
Google will continue to work with Intel, buying SmartNICs for its public cloud rather than blazing its own trail as AWS has done with its Nitro NICs.
Key facts
- Xeons have been the CPU of choice for Nvidia's 8-GPU DGX reference designs going back to the H100 in 2022
- While Amazon employs custom ASICs from its Annapurna Labs team and Microsoft uses custom logic running on FPGAs, Google tapped Intel to develop an ASIC-based IPU called Mount Evans, which launched alongside its C3 instances in 2022
- Intel didn't elaborate on what Google's next-gen IPUs might look like, but given the demand for high-speed networking in AI compute clusters, there's a good chance they'll be significantly faster
- Alongside the expanded IPU collab, Intel was also keen to note that the Chocolate Factory wasn't giving up on its Xeon processors, which will power a variety of general purpose and AI workloads
Summary
Like most hyperscalers today, Google employs SmartNICs, or as Intel prefers to call them, infrastructure processing units (IPUs). While Amazon employs custom ASICs from its Annapurna Labs team and Microsoft uses custom logic running on FPGAs, Google tapped Intel to develop an ASIC-based IPU called Mount Evans, which launched alongside its C3 instances in 2022. On Thursday, Intel announced that Google had expanded this collaboration to develop new IPUs, in an announcement that reads like a desperate attempt to convince the public that its Datacenter and Networking divisions are still relevant. Intel CFO David Zinsner had alluded to increased demand for these services during the company's Q4 earnings call in January, touting that its custom ASIC biz grew more than 50 percent in 2025 and exited Q4 at an annualized revenue run rate above $1 billion.