Broadcom Inc. last week identified Anthropic as the customer behind a massive chip procurement deal that has captured Wall Street’s attention since September. Broadcom CEO Hock Tan disclosed Thursday that Anthropic placed a $10 billion order for Google’s latest tensor processing units (TPUs), followed by an additional $11 billion order in the most recent quarter.

The disclosure ends months of speculation about the mystery buyer’s identity. Broadcom first announced the $10 billion deal during its September earnings call but declined to name the customer, sparking intense investor interest amid the AI infrastructure boom. Company officials had only confirmed the buyer wasn’t OpenAI, which has its own chip agreement with Broadcom.

The orders consist of Ironwood TPU racks built with Broadcom’s custom ASIC technology. The chips are designed to compete with NVIDIA Corp.’s market-leading GPUs, and some experts believe they offer superior efficiency for certain AI workloads. Tan emphasized that Broadcom is delivering complete server racks to Anthropic, not just individual chips. Anthropic is the company’s fourth major customer for its XPU custom chip platform.

Broadcom also announced a fifth custom chip customer placed a $1 billion order in the fourth quarter, though the company again withheld the buyer’s name. The $21 billion in combined orders from Anthropic underscores the extraordinary scale of hardware investment among leading AI companies and reinforces Broadcom’s position as a critical supplier in the rapidly evolving AI infrastructure market.

The disclosure also highlights Anthropic’s aggressive scaling strategy as competition intensifies in frontier AI development. The company’s hardware investments dovetail with a sweeping cloud partnership with Google announced in late October, valued in the tens of billions of dollars. That agreement grants Anthropic access to up to one million Google TPUs and is expected to deliver over one gigawatt of new compute capacity by 2026.

Anthropic employs a diversified infrastructure approach, distributing workloads across Google’s TPUs, Amazon’s Trainium chips, and NVIDIA GPUs. The multi-cloud strategy allows the company to optimize different models for training, inference, and research based on each platform’s strengths.

For Google, Anthropic’s commitment to TPUs provides crucial validation as the company positions its chips as a credible alternative to NVIDIA’s GPUs. Google Cloud executives have highlighted the price-performance advantages that drove Anthropic’s decision to expand TPU usage. After more than a decade of internal development, Google now offers TPUs as a cloud service rather than selling hardware directly.

Analysts view TPUs as increasingly important as power constraints — not chip supply — become the primary bottleneck for AI data centers. Google’s power-efficient designs could become a significant competitive advantage as global AI compute demand accelerates.
