AI · Datacenter Dynamics
Delivering this much power in AI data centers is a challenge, but common components are addressing the need
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
All of this power turns into heat, and removing that heat is no longer feasible with traditional air cooling.
Key facts
- Today’s Nvidia Blackwell chips draw in the 1,000-1,400 watt range
- Schneider Electric details these liquid cooling challenges in its White Paper 210, “Direct Liquid Cooling System Challenges in Data Centers”
- The latest NVIDIA designs reach 142 kW per rack, and NVIDIA has publicly stated that 1 MW per rack is on the horizon
- (As a rule of thumb, every 1°C you can raise your chiller temperature translates to roughly 2-2.5 percent savings in electrical efficiency)
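The chiller rule of thumb above can be sketched as simple arithmetic. This is a minimal illustration, not a sizing tool: the function name and the example setpoint increase of 4°C are assumptions for demonstration, and the 2-2.5 percent-per-degree range comes from the article's rule of thumb.

```python
def chiller_savings_pct(delta_c: float, pct_per_degree: float = 2.0) -> float:
    """Estimated percent savings in electrical efficiency from raising the
    chiller water setpoint by delta_c degrees Celsius, using a linear
    rule of thumb (2-2.5 percent per degree). Illustrative only."""
    return delta_c * pct_per_degree

# Hypothetical example: raising the setpoint by 4 C.
print(chiller_savings_pct(4.0))        # low end of the rule: 8.0 percent
print(chiller_savings_pct(4.0, 2.5))   # high end of the rule: 10.0 percent
```

In practice the relationship is not perfectly linear, which is why the article states it only as a rule of thumb; the sketch simply makes the arithmetic concrete.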
Summary
The growth of graphics processing unit (GPU)-based accelerated computing that powers AI workloads is changing data center architecture. This level of power consumption is driving up a common industry metric: power consumption per rack. Delivering this much power in AI data centers is a challenge, but common components are addressing the need. Liquid cooling comes in several forms, but direct liquid cooling (DLC), also known as direct-to-chip, has become the preferred technology for cooling these chips. Deploying direct liquid cooling at scale in AI data centers, however, is still new and introduces additional complexity into an already complex environment.