Compute Express Link, or CXL, is dramatically changing the way computer systems use memory. Tutorials at the IEEE Hot Chips conference and the recent SNIA Storage Developers Conference explored how CXL works and how it will change the way we do computing. In addition, recent announcements by Colorado startup IntelliProp on its Omega Memory Fabric chips pave the way for CXL implementations that enable memory pooling and composable infrastructure.
The primary initial applications for CXL have been memory expansion for individual CPUs, but CXL will have its greatest impact in sharing different types of memory technology (DRAM and non-volatile memory) among CPUs. The figure below (from the CXL Hot Chips tutorial) shows how memory can be shared with CXL.
As Samsung Electronics VP Yang Seok Ki said at SNIA SDC, CXL is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. CXL versions 1.0/1.1 and 2.0 were released earlier (working with PCIe 5.0), and in early August, at the Flash Memory Summit, CXL version 3.0 was released, which works with the faster PCIe 6.0 interconnect. CXL 3.0 enables multi-level switching, memory fabrics, and peer-to-peer direct memory access.
The presentation also described how CXL version 2.0 makes medium-distance memory available via local CXL connections to a CPU, while CXL version 3.0 switched networks make remote memory available, as shown below.
Near memory is connected directly to the CPU. Some of the first CXL products available were memory expanders that provide additional medium-tier memory to a CPU. CXL opens the door to memory tiering, which offers a trade-off between performance and cost similar to that of storage tiering.
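To make the storage-tiering analogy concrete, the sketch below models a simple tiering policy: new pages land in the slower, cheaper tier, and pages that are accessed frequently get promoted toward the faster near tier. All names, capacities, and latency figures are illustrative assumptions, not part of any CXL specification.

```python
# Hypothetical memory-tiering sketch: hot pages migrate toward the
# faster tier, the way storage tiering moves hot data to faster media.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    latency_ns: int            # assumed rough access latency
    capacity: int              # pages this tier can hold
    pages: set = field(default_factory=set)

    def has_room(self):
        return len(self.pages) < self.capacity

class TieredMemory:
    def __init__(self, tiers, promote_threshold=3):
        self.tiers = tiers     # ordered fastest (near) to slowest (far)
        self.promote_threshold = promote_threshold
        self.access_counts = {}

    def allocate(self, page):
        # place new pages in the slowest tier that has room
        for tier in reversed(self.tiers):
            if tier.has_room():
                tier.pages.add(page)
                self.access_counts[page] = 0
                return tier.name
        raise MemoryError("all tiers full")

    def access(self, page):
        self.access_counts[page] += 1
        idx = next(i for i, t in enumerate(self.tiers) if page in t.pages)
        # promote a hot page one tier closer to the CPU, if there is room
        if (self.access_counts[page] >= self.promote_threshold
                and idx > 0 and self.tiers[idx - 1].has_room()):
            self.tiers[idx].pages.remove(page)
            self.tiers[idx - 1].pages.add(page)
            self.access_counts[page] = 0
            idx -= 1
        return self.tiers[idx].latency_ns  # latency paid for this access
```

A real tiering policy would also demote cold pages and weigh bandwidth and media type, but the promotion loop above captures the basic performance/cost trade-off.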
IntelliProp has just announced its Omega Memory Fabric chip. The chips incorporate the CXL standard with the company’s fabric management software and network attached memory (NAM) system. IntelliProp also announced three field-programmable gate array (FPGA) products that incorporate its Omega memory fabric. The company says its memory-agnostic innovation will help adopt composable memory that significantly improves data center energy consumption and efficiency. The company says its Omega Memory Fabric has the following features:
- Incorporates the CXL standard
- Dynamic multi-pathing and memory allocation
- End-to-end (E2E) security using AES-XTS 256 with integrity protection
- Supports non-tree topology for peer-to-peer
- Management scaling for large deployments using multi-fabric/subnet support and distributed managers
- Direct memory access (DMA) that moves data between memory tiers efficiently without tying up CPU cores
- Memory agnostic and 10x faster than RDMA
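A fabric manager like the one in the feature list above is, at its core, an allocator for pooled memory: it tracks network-attached memory devices on the fabric and binds regions of them to requesting hosts, so capacity is composed on demand rather than fixed per server. The sketch below is a minimal, hypothetical illustration of that role; device names, sizes, and the first-fit policy are assumptions, not IntelliProp's implementation.

```python
# Hypothetical fabric-manager sketch: bind regions of pooled,
# network-attached memory (NAM) devices to hosts on demand.
class FabricManager:
    def __init__(self):
        self.devices = {}      # device id -> free capacity in GB
        self.bindings = []     # (host, device id, size in GB)

    def register_device(self, dev_id, capacity_gb):
        self.devices[dev_id] = capacity_gb

    def allocate(self, host, size_gb):
        # first-fit over pooled devices; a real manager would also
        # weigh hop count, bandwidth, and media type (DRAM vs. SCM)
        for dev_id, free in self.devices.items():
            if free >= size_gb:
                self.devices[dev_id] -= size_gb
                self.bindings.append((host, dev_id, size_gb))
                return dev_id
        return None            # no device can satisfy the request

    def release(self, host):
        # return all of a host's bound memory to the pool
        kept = []
        for h, dev, size in self.bindings:
            if h == host:
                self.devices[dev] += size
            else:
                kept.append((h, dev, size))
        self.bindings = kept
```

Because memory is returned to the pool on release, capacity stranded in one server under a fixed-memory design can instead be re-bound to whichever host needs it next.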
Three FPGA solutions connect CXL devices to CXL hosts: an adapter, a switch, and a fabric manager. IntelliProp says ASIC solutions will be available in 2023. The company said the solutions enable data centers to increase performance, scale from dozens to thousands of host nodes, consume less energy as data travels with fewer hops, and mix shared DRAM (fast memory) with shared SCM (slow memory).
CXL is poised to change the way memory is used in computer architectures, as discussed in a 2022 Hot Chips tutorial and at SNIA SDC. IntelliProp introduced three FPGA solutions to enable the company's Omega Memory Fabric technology and CXL-enabled memory fabrics.