
The data center semiconductor market is experiencing unprecedented expansion, with global compute revenue projected to surge nearly ninefold, from $62 billion in 2022 to $546 billion in 2029, according to a Futurum Research report released Wednesday.
But despite mounting interest in specialized artificial intelligence (AI) chips, graphics processing units (GPUs) remain the undisputed leader in data center investments.
GPUs command approximately 75% of total compute spending in 2025, dwarfing both traditional CPUs at 12% and emerging XPUs — custom AI accelerators including Google’s TPU, Amazon.com Inc.’s Trainium, and Meta Platforms Inc.’s MTIA — at 13%.
However, the landscape is shifting.
XPUs are projected to post strong growth, with an estimated 23% annual increase in compute spending in 2026. While this trails the 29% growth projected for GPUs, it significantly outpaces the 12% growth forecast for CPUs, signaling a fundamental transformation in how companies approach AI infrastructure.
“While data center operators continue to rely heavily on GPUs from companies such as NVIDIA and AMD, it’s increasingly evident that the adoption of XPUs is accelerating,” Ray Wang, research director for semiconductors, supply chain, and emerging technology at Futurum, said in a statement. “This trend does not imply that XPUs will replace GPUs. Rather, as overall compute demand continues to expand rapidly, the total addressable market for compute is rising, creating room for both architectures to thrive.”
The GPU market alone is expected to expand from $13 billion in 2022 to $385 billion by 2029, fueled by AI workloads that increasingly depend on GPU acceleration. NVIDIA Corp. and Advanced Micro Devices Inc. continue to lead the segment.
Meanwhile, XPUs are carving out their own substantial market, projected to grow from $15.5 billion to $84 billion over the same period. Traditional CPUs, while growing from $33.7 billion to $76.6 billion, are seeing their relative share of spending decline as compute demand pivots decisively toward accelerators.
The research, which includes Futurum’s 2H 2025 Data Center Semiconductor Decision Maker Survey and Q2 2025 Data Center Semiconductor Market Report, reveals AI workloads are becoming increasingly diversified beyond pure model training. Balanced training and inference workloads now lead at 38%, followed by mostly inference at 33%, training-dominant operations at 19%, and data preparation and ETL-heavy tasks at 10%.
When it comes to purchase decisions, speed remains king. Time-to-train emerged as the top driver of compute purchases at 31%, followed by cost efficiency measured in dollars per TFLOP or tokens per second per dollar at 22%, and power efficiency at 16%. Total cost of ownership, networking capabilities, and sustainability considerations ranked lower at 13%, 11%, and 7% respectively.
In 2025, the total data center compute market is expected to reach $232.4 billion, with GPUs accounting for $174.7 billion, XPUs contributing $30.9 billion, and CPUs adding $26.8 billion. The commitment among hyperscalers and enterprises to integrate custom AI accelerators into their compute infrastructure appears strong and growing, particularly as those companies seek to optimize performance and reduce dependency on third-party GPU suppliers.

