
NVIDIA has a fresh answer for data centers’ most pressing bottleneck: how to push beyond the walls of a single facility when power and cooling capacity are tapped out. The company’s new Spectrum-XGS Ethernet extends its networking stack so multiple data centers can function as a single, unified compute plane—an architecture that, in theory, will enable far greater output.
As NVIDIA touts it, scale-up and scale-out have carried data centers this far, and now Spectrum-XGS adds “scale-across.” By layering new distance-aware algorithms on top of the company’s existing Spectrum-X platform, NVIDIA claims it can keep latency predictable and bandwidth high when traffic travels city-to-city, or even continent-to-continent.
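Why distance needs its own algorithms comes down to the bandwidth-delay product: the longer the round trip, the more data must be in flight to keep a link busy. A back-of-envelope sketch in plain Python (the RTT figures are illustrative assumptions, not NVIDIA specifications):

```python
def in_flight_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight
    to keep a link of the given speed fully utilized."""
    bits_per_sec = link_gbps * 1e9
    rtt_sec = rtt_ms / 1e3
    return bits_per_sec * rtt_sec / 8

# One 800 Gb/s link at three assumed distances:
for label, rtt in [("same campus", 0.05), ("city-to-city", 5.0), ("coast-to-coast", 60.0)]:
    mb = in_flight_bytes(800, rtt) / 1e6
    print(f"{label:>14}: {rtt:>6.2f} ms RTT -> {mb:,.0f} MB in flight")
```

A window sized for the campus case (a few megabytes) leaves a continental link mostly idle, while buffers sized for the continental case (gigabytes) are wasteful up close, which is why congestion control that adapts to distance is the interesting part of the pitch.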
Notably, NVIDIA says Spectrum-XGS requires no new switching hardware. The improvement comes from enhanced algorithms: new control planes that better schedule and steer data traffic over distances. For data center managers reluctant to buy new hardware, that’s a big deal. In essence, XGS is a software-based improvement that rides today’s installed base.
NVIDIA claims Spectrum-XGS offers auto-tuned congestion control, precision latency management, and end-to-end telemetry. Together, the company says, these features nearly double the performance of its NVIDIA Collective Communications Library (NCCL) for multi-GPU, multi-node jobs stretched across geographic locations. In short: less jitter, fewer stalls, more tokens per second.
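The emphasis on jitter is not incidental: collectives like NCCL's all-reduce behave as barriers, so every participant waits for the slowest one and tail latency, not average latency, sets the training pace. A toy simulation in plain Python (worker counts and latency figures are assumptions for illustration, not Spectrum-XGS measurements):

```python
import random

def step_time_ms(workers: int, base_ms: float, jitter_ms: float,
                 rng: random.Random) -> float:
    """One synchronous step: the collective completes only when the
    slowest worker arrives, so the step takes the maximum latency."""
    return max(base_ms + rng.uniform(0.0, jitter_ms) for _ in range(workers))

rng = random.Random(0)
for jitter in (0.1, 1.0, 10.0):
    steps = [step_time_ms(1024, 5.0, jitter, rng) for _ in range(1000)]
    avg = sum(steps) / len(steps)
    print(f"jitter up to {jitter:5.1f} ms -> mean step {avg:6.2f} ms")
```

With 1,024 workers the mean step time climbs toward base plus nearly the full jitter bound: at scale, the worst-case delay is paid on almost every step, which is why predictable latency across long links matters more than a good average.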
There’s also an industry battle going on here. While InfiniBand, a market NVIDIA dominates, has long ruled AI back-end networks, the open Ethernet standard is gaining momentum as enterprise buyers chase skills familiarity and a more flexible vendor ecosystem. Industry trackers report NVIDIA’s Ethernet gear as the fastest-growing switching line last year, and project tens of billions of dollars in data-center Ethernet spend over the next five years. Spectrum-XGS is aimed squarely at that shift, importing more of InfiniBand’s toolset into the more popular Ethernet world.
NVIDIA says the platform delivers 1.6x the bandwidth density of standard Ethernet in multi-tenant AI environments, while holding service levels steady as workloads traverse longer links. To support this, Spectrum-XGS builds on Spectrum SN5600 switches and BlueField-class DPUs from the original Spectrum-X line, and pairs them with ConnectX-8 SuperNICs—800 Gb/s adapters designed for AI traffic patterns.
The company promotes the new release as ideal for agentic AI, whose heavy inference loads burden enterprise platforms. Data centers handling that load can hit hard ceilings on power and cooling: only so many megawatts to draw, only so much heat to shed. Spectrum-XGS’s ability to pool multiple facilities can help relieve those constraints, NVIDIA claims.
While Spectrum-XGS sounds promising as a “scale across” answer to data center limits, the real test will come when it performs in the messy world of today’s mixed-vendor facilities. Can it navigate outages, failure domains, and cranky WANs? The release could also be a milestone for Ethernet: if Spectrum-XGS can squeeze additional throughput from the GPUs companies already own, without replacing switches, it strengthens the case for Ethernet-based AI buildouts.
NVIDIA’s bet is that data center infrastructure will look less like single campuses and more like a unified networked platform. If Spectrum-XGS performs in the real world as NVIDIA promotes it, “scale-across” may become the next chapter in the AI data center playbook.