
Even while facing competition from other CPU interconnect technologies, Compute Express Link (CXL) solutions are forecast to see significant sales gains, according to a new report by semiconductor research firm Objective Analysis. The firm projects that CXL product sales will grow at a robust clip, reaching $3.4 billion by 2028.
CXL is an umbrella term for the protocols and systems used to connect CPUs to other components in a computer system, including peripherals and, especially, processor chips and memory. CXL enables the high-speed data transmission required by AI and machine learning workloads. The technology is known for its low latency and high bandwidth, and it is also becoming more cost-effective than in previous years.
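To make the idea concrete: on a Linux host whose kernel includes the CXL driver, attached CXL devices are registered in sysfs under /sys/bus/cxl/devices. The minimal sketch below simply lists whatever the driver has enumerated; it assumes that sysfs path, which is specific to recent Linux kernels with CXL support built in.

```c
/* List CXL devices registered by the Linux CXL driver.
 * Assumes a kernel with CXL bus support; on machines without CXL
 * hardware or driver support the directory simply will not exist. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/bus/cxl/devices";
    DIR *dir = opendir(path);

    if (dir == NULL) {
        perror("no CXL devices exposed by the kernel");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;                 /* skip "." and ".." entries */
        printf("CXL device: %s\n", entry->d_name);
    }

    closedir(dir);
    return 0;
}
```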
A key reason for CXL's expected sales growth is that it provides advanced support for AI deployments while also serving legacy data centers efficiently. In addition to Objective Analysis' forecast, tech research firm ResearchPivot predicts that the CXL controller market will see a torrid 18.2% CAGR between 2026 and 2033, reaching $5.4 billion annually.
Manufacturers can embed CXL technology into servers for about $60, though some configurations are priced higher. CXL switch units are typically more expensive because of their broader feature set, such as support for x256 lane configurations. But pricing for both is expected to fall as vendors ramp up production in anticipation of continued adoption.
CXL is particularly valued for its ability to handle challenges like memory and bandwidth limitations in data centers. If a given server is running low on memory under heavy compute demand, a CXL connection to another server can be used to share memory resources.
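In practice, CXL-attached capacity, whether from a local expander or a pooled appliance, typically appears to the host operating system as an additional, often CPU-less, NUMA node, so software can reach it with ordinary NUMA APIs. Below is a minimal sketch using libnuma that assumes the CXL memory has been onlined as NUMA node 1; the node number is configuration-dependent and used here only for illustration.

```c
/* Allocate a buffer from a CXL-attached memory node via libnuma.
 * Assumes the CXL expander memory is onlined as NUMA node 1, which is
 * configuration-dependent; compile with -lnuma. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1                    /* hypothetical node id for CXL memory */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    size_t size = 64UL * 1024 * 1024; /* 64 MiB */
    void *buf = numa_alloc_onnode(size, CXL_NODE);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return 1;
    }

    memset(buf, 0, size);             /* touch pages so they are actually placed */
    printf("allocated %zu bytes on NUMA node %d\n", size, CXL_NODE);

    numa_free(buf, size);
    return 0;
}
```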
Other key advantages of CXL include its ability to support interconnections within heterogeneous data center environments, in which an enterprise has built a network that connects gear from several vendors. Also, the PCIe standards that CXL builds on are seeing increasing adoption, with the upcoming PCIe 6.0 standard expected to be a key partner technology. Contributing most to its adoption, CXL is continually evolving to suit a wider range of compute situations.
There are plenty of competing CPU interconnect technologies. Intel's Ultra Path Interconnect (UPI) also offers a low-latency connection between scalable multiprocessor compute systems; it is used in Intel Xeon processors. UCIe is an open standard still gaining adoption, geared for connections between chiplets (small, modular integrated circuits).
HyperTransport (HT) allows different link widths to be combined within a single data center system, a major selling point for today's mixed computing environments. UFS is considered a reliable standby for the growing flash market, known for its ability to aid compatibility and machine performance.
Arguably the interconnect standard to watch is NVIDIA's NVLink, a high-speed standard that allows groups of GPUs in a rack to crunch data as if they were a single unit. NVLink supports 1.8 TB/s of bandwidth, ideal for AI workloads. NVIDIA recently opened up the proprietary standard so that other vendors, including makers of non-NVIDIA accelerators, can leverage its high speed.
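From a programmer's point of view, that GPU-to-GPU capability surfaces through the CUDA runtime's peer-access calls. The sketch below checks whether GPU 0 can address GPU 1 directly, enables peer access, and performs a device-to-device copy; the device indices are assumptions, and whether the traffic actually rides NVLink rather than PCIe depends on the machine's topology.

```c
/* Enable direct peer-to-peer access between GPU 0 and GPU 1.
 * Compile with nvcc; whether transfers use NVLink or PCIe depends
 * on how the GPUs are physically connected. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (!can_access) {
        printf("GPU 0 cannot directly access GPU 1\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   /* flags argument must be 0 */

    /* Copy a buffer from GPU 1 to GPU 0; with peer access enabled this
     * can go directly over the GPU interconnect. */
    size_t size = 1UL << 20;            /* 1 MiB */
    void *src, *dst;
    cudaSetDevice(1);
    cudaMalloc(&src, size);
    cudaSetDevice(0);
    cudaMalloc(&dst, size);

    cudaMemcpyPeer(dst, 0, src, 1, size);
    cudaDeviceSynchronize();
    printf("peer-to-peer copy of %zu bytes complete\n", size);

    cudaFree(dst);
    cudaSetDevice(1);
    cudaFree(src);
    return 0;
}
```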