Broadcom has unveiled Thor Ultra, an 800G Ethernet network interface card (NIC) built expressly for the back-end networks that connect modern AI clusters. The key point here is Broadcom’s focus on open standards as a competitive edge: Thor Ultra’s specs reflect the company’s long-held strategy that as enterprises build the infrastructure to scale AI models, the winning interconnect will be fast, loss-tolerant, and open.

Thor Ultra is Broadcom’s first NIC designed from the start for scale-out AI fabrics, the rack-to-rack domain where congestion spikes and microbursts are commonplace. Rather than chasing every possible offload, Broadcom focused the card on four pressure points that have dogged RDMA-era designs: single-path forwarding, in-order delivery, blunt retransmission, and rigid congestion control. The net effect is to keep the fabric busy and the XPUs fed with data even under congestion.
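To see why blunt retransmission is on that list, consider what a single dropped packet costs under classic go-back-N recovery versus selective retransmit. The sketch below is purely illustrative (it is not Broadcom's implementation and assumes a single loss in a fixed window of in-flight packets):

```python
# Hypothetical illustration: packets resent after ONE loss under
# go-back-N retransmission versus selective retransmit.

def goback_n_resends(window: int, lost_index: int) -> int:
    # Go-back-N discards everything from the lost packet onward, so the
    # sender resends the lost packet plus every packet that followed it.
    return window - lost_index

def selective_resends(window: int, lost_index: int) -> int:
    # Selective retransmit resends only the packet that was actually lost.
    return 1

window = 64   # packets in flight
lost = 3      # index of the single dropped packet

print(goback_n_resends(window, lost))   # 61 packets resent
print(selective_resends(window, lost))  # 1 packet resent
```

The gap widens with window size, which is why selective retransmit matters most on fat, bursty AI fabrics where large windows keep the links full.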

“Thor Ultra delivers on the vision of Ultra Ethernet Consortium for modernizing RDMA for large AI clusters,” said Ram Velaga, senior vice president and general manager of the Core Switching Group at Broadcom. “Thor Ultra is the industry’s first 800G Ethernet NIC and is fully feature compliant with UEC specification.”

Performance and Power Usage

The new NIC achieves 800 Gb/s aggregate bandwidth over PCIe Gen6 x16 and ships in both 100G and 200G PAM4 SerDes variants, accommodating today’s 100G optics while leaving a path to 200G lanes as AI workloads call for them. Broadcom points to SerDes (serializer/deserializer) blocks with a very low bit error rate (BER) to improve job-completion stability.
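The lane math behind those numbers is straightforward. This back-of-the-envelope sketch is my own arithmetic, not a Broadcom datasheet, and uses raw per-direction PCIe bandwidth without encoding overhead:

```python
# 800 Gb/s of Ethernet can be carried as 8 lanes of 100G PAM4 SerDes
# or 4 lanes of 200G PAM4.
ethernet_gbps = 800
lanes_100g = ethernet_gbps // 100   # 8 lanes at 100G
lanes_200g = ethernet_gbps // 200   # 4 lanes at 200G

# PCIe Gen6 signals at 64 GT/s per lane; x16 gives 1024 Gb/s raw per
# direction, leaving headroom above the 800G Ethernet side.
pcie_gen6_raw_gbps = 64 * 16

print(lanes_100g, lanes_200g)              # 8 4
print(pcie_gen6_raw_gbps > ethernet_gbps)  # True
```

The headroom is the point: the host interface is not the bottleneck, so the card can sustain line rate toward the fabric.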

On the security side, Thor Ultra adds line-rate encryption and decryption via PSP offload, secure boot with signed firmware, and device attestation, features aimed at multi-tenant clusters and stricter compliance requirements.

For performance, Broadcom projects roughly 15% shorter job completion times from the combination of selective retransmission and fine-grained load balancing. In clusters where networking can account for a low-teens percentage of total system cost, those gains matter.
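A quick bit of arithmetic shows why a completion-time gain punches above the network's cost share (the numbers here are illustrative assumptions, not Broadcom figures):

```python
# Illustrative only: a network upgrade that is a low-teens share of system
# cost, but shortens every job by 15%, pays off across the WHOLE cluster.
network_cost_share = 0.13   # assume networking is ~13% of system cost
completion_speedup = 0.15   # jobs finish ~15% sooner

# Cost per completed job scales with wall-clock time, so a 15% shorter
# runtime cuts effective cost per job by 15% of the ENTIRE system budget,
# not just 15% of the network's slice.
cost_per_job_ratio = 1 - completion_speedup
print(cost_per_job_ratio)   # 0.85
```

In other words, the same cluster completes roughly 1/0.85, or about 18%, more jobs per dollar, a return larger than the network's entire share of the budget.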

The company also touts a power usage advantage. Thor Ultra targets roughly 50W, well below the profile of multifunction DPUs that carry general-purpose cores and deep packet inspection stacks. By stripping the NIC to what back-end AI networks truly need, Broadcom is shifting the industry toward simpler cards that still accelerate the critical path.

Many Ways to Deploy

Deployment flexibility matters to hyperscalers, and Broadcom leans into it. Thor Ultra is sampling now as standard PCIe CEM and OCP 3.0 cards, with options to purchase the die for custom boards, integrate it as a chiplet alongside an XPU, or license the IP outright. That range mirrors the varied ways AI systems are assembled, some as off-the-shelf servers, others as custom setups tuned to a cloud’s internal design rules.

For Broadcom, the strategy is that providing an 800G, UEC-compliant NIC with load-balanced paths and programmable congestion logic is enough to tilt the contest toward Ethernet. For enterprises building clusters measured in the tens of thousands of GPUs, an open, standards-driven fabric that shaves completion time without inflating power or complexity is a compelling proposition. Thor Ultra doesn’t end the interconnect debate, but it makes Ethernet a sharper instrument for AI at hyperscale.