
As AI workloads strain data-center power budgets, Texas Instruments has unveiled a portfolio of power-management devices designed to handle those massive requirements and to help hyperscalers make the leap from today’s 48-volt infrastructure to a future built on 800 VDC.
The announcement, showcased at this week’s Open Compute Project Summit in San Jose, positions TI as an enabler of scalable AI infrastructure. Working in collaboration with NVIDIA and other partners, the company is targeting an unglamorous yet critical challenge of AI’s growth curve: how to deliver clean, efficient power from the grid to the GPU gate.
“Data centers are evolving from simple server rooms to sophisticated power infrastructure hubs,” said Chris Suchoski, TI’s data-center sector GM. “Scalable, efficient power systems are the foundation that allows AI innovation to move forward.”
A New Era of Power Density
Today’s data centers are voracious energy consumers. Within just a few years, the average IT rack could consume more than a megawatt, far beyond the limits of current 12- or 48-volt designs. Higher voltages allow more efficient transmission and smaller cable sizes, but they also raise new engineering hurdles around safety, conversion ratios, and heat.
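The cabling argument is simple Ohm’s-law arithmetic: for a fixed power level, current scales inversely with bus voltage, and resistive loss with the square of the current. A back-of-the-envelope sketch (all figures are illustrative assumptions, not TI specifications):

```python
# Why higher distribution voltages help: for the same delivered power,
# raising the bus voltage cuts the current proportionally, and I²R
# conduction loss in the cabling falls with the square of the current.
# The 1 MW rack figure is a hypothetical round number for illustration.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current (A) needed to deliver a given power at a given bus voltage."""
    return power_w / voltage_v

RACK_POWER_W = 1_000_000  # hypothetical 1 MW rack

i_48v = bus_current(RACK_POWER_W, 48)    # ~20,833 A
i_800v = bus_current(RACK_POWER_W, 800)  # 1,250 A

# For identical cable resistance, conduction loss scales as (I1/I2)².
loss_ratio = (i_48v / i_800v) ** 2

print(f"48 V bus:  {i_48v:,.0f} A")
print(f"800 V bus: {i_800v:,.0f} A")
print(f"Conduction loss at 48 V is ~{loss_ratio:.0f}x that at 800 V")
```

In practice cables would be paralleled and upsized long before carrying 20,000 A, which is exactly why the industry is moving the bus voltage up instead.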
Texas Instruments’ latest reference architectures and modules aim squarely at these challenges. In collaboration with NVIDIA, TI is building power-management devices to support the expansion to an 800 VDC power architecture. Its new 30 kW AI-server power-supply design is built around a three-phase, three-level flying-capacitor power-factor-correction (PFC) converter, and can be configured as a single 800-V output or as separate output supplies. Another module integrates two inductors for trans-inductor voltage regulation (TLVR), helping engineers boost power density without losing thermal reliability.
Additionally, TI introduced a GaN converter module rated at 1.6 kW and more than 97% conversion efficiency. It’s a reminder that incremental gains in efficiency translate into huge savings at scale: every 1% improvement across thousands of racks can cut megawatts from a facility’s cooling load.
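The scale argument behind that efficiency claim is easy to make concrete. Assuming a hypothetical facility of 1,000 racks drawing 30 kW each (figures chosen for illustration; the rack count and efficiency values are not from TI's announcement):

```python
# Illustrative arithmetic: what one percentage point of conversion
# efficiency is worth at facility scale. All inputs are assumptions.

RACKS = 1_000
RACK_POWER_W = 30_000            # echoes the 30 kW supply design scale
IT_LOAD_W = RACKS * RACK_POWER_W  # 30 MW of IT load

def conversion_loss(it_load_w: float, efficiency: float) -> float:
    """Watts dissipated as heat in the power chain at a given efficiency."""
    return it_load_w / efficiency - it_load_w

loss_96 = conversion_loss(IT_LOAD_W, 0.96)  # 1.25 MW of heat
loss_97 = conversion_loss(IT_LOAD_W, 0.97)  # ~0.93 MW of heat
saved_w = loss_96 - loss_97                 # ~322 kW less heat to remove

print(f"Heat at 96%: {loss_96/1e6:.2f} MW")
print(f"Heat at 97%: {loss_97/1e6:.2f} MW")
print(f"Saved by +1 point: {saved_w/1e3:.0f} kW")
```

Every watt not lost in conversion is also a watt the cooling plant never has to pump out, so the facility-level savings compound beyond the raw electrical figure.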
From 48 Volts to 800 Volts
Beyond component innovation, TI’s new power-management solutions address the shift to 800 VDC, which isn’t a simple swap-out: it changes the entire power-delivery infrastructure. The company’s technical brief explores several architectures (three-stage, two-stage, and series-stacked), weighing the trade-offs among efficiency, cost, and board space.
In a typical design, an 800-volt bus feeds an intermediate converter that steps down to 50 volts, then to 12.5 volts or 6 volts at the point of load. Each conversion adds a small loss, but higher intermediate voltages enable faster switching and smaller components, improving transient response for power-hungry AI accelerators. TI’s analysis suggests that with careful optimization, overall peak efficiency can approach 89% from input to core rail, a significant advance given the multi-kilowatt scales involved.
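As a sanity check on that figure, the efficiencies of cascaded stages multiply. With hypothetical per-stage values (my assumptions for illustration, not TI’s published numbers), a three-stage chain lands near the 89% mark:

```python
import math

# Hypothetical per-stage efficiencies for a three-stage delivery chain
# (800 V bus -> intermediate bus -> point-of-load -> core rail).
# Chosen only to illustrate how per-stage losses compound.
STAGE_EFFICIENCY = {
    "800 V -> 50 V intermediate converter": 0.975,
    "50 V -> 12.5 V step-down":             0.96,
    "12.5 V -> core point-of-load":         0.95,
}

overall = math.prod(STAGE_EFFICIENCY.values())
print(f"End-to-end efficiency: {overall:.1%}")  # prints 88.9%
```

The compounding is why each stage must be individually excellent: three stages at a respectable 96% each would already drop the chain below 89%.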
Collaboration with NVIDIA
TI’s partnership with NVIDIA underscores how closely semiconductor vendors and system builders now cooperate to meet AI’s power challenge. GPUs optimized for large-language-model training can draw hundreds of amps each. Coordinating the silicon that feeds them requires joint engineering across component boundaries.
With its new 800-volt solutions, Texas Instruments is positioning itself as a key player in building the electrical backbone of the AI era, from the substation to the GPU socket. If successful, the company’s approach could redefine not just how much power data centers consume, but how efficiently that power is delivered.