Intel · AMD · Compute · The Register
Memory godboxes could offer relief from the RAMpocalypse
Compiled by KHAO Editorial — aggregated from 1 outlet.
In modern datacenters, storage can live almost anywhere: local to the machine, accessed remotely over the network, or shared between systems.
Key facts
- On the performance end of things, CXL 3.0 moves to PCIe 6.0 as a baseline, which provides 16 GB/s of bidirectional bandwidth per lane
- Assuming 64 lanes of CXL per CPU, that works out to an additional 512 GB/s of bandwidth in each direction
- The 2.0 spec, which showed up in 2020, added basic support for switching, which meant memory could be pooled and then allocated to any number of connected systems
- The 1.0 spec opened the door to memory expansion modules, which allow you to add more memory by slotting them into a CXL-compatible PCIe slot
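The bandwidth figures above can be checked with some back-of-the-envelope arithmetic. This sketch assumes PCIe 6.0's roughly 8 GB/s per lane in each direction (half of the 16 GB/s bidirectional figure quoted above); the function name and constant are illustrative, not from any CXL API.

```python
# Rough CXL 3.0 bandwidth math, assuming PCIe 6.0 as the baseline:
# 16 GB/s bidirectional per lane = ~8 GB/s per lane in each direction.
GBPS_PER_LANE_PER_DIRECTION = 8  # GB/s, PCIe 6.0 x1, one direction

def cxl_bandwidth_gbps(lanes: int) -> int:
    """Aggregate one-direction bandwidth for a given CXL lane count."""
    return lanes * GBPS_PER_LANE_PER_DIRECTION

# 64 lanes per CPU, as in the key facts above:
print(cxl_bandwidth_gbps(64))  # 512 GB/s per direction
```

This matches the article's 512 GB/s figure for a 64-lane CXL attach, counting each direction separately.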
Summary
Amid the AI-fueled memory crunch, will Compute Express Link finally have its moment to shine? Just as storage moved off individual machines, the next generation of servers will treat system memory in much the same way. The ongoing DRAM shortage has created a perfect storm for the proliferation of these memory appliances, which not only allow memory to be pooled, but also let data stored in that memory be shared by multiple machines simultaneously. More importantly, your next round of servers will probably support the tech, if they don't already.