
AMD delivered a clear message at its 2025 Advancing AI event: We’re a leading player in the enterprise AI market and are well-equipped to compete with NVIDIA. To back up that message, AMD promoted an AI strategy built around its Instinct MI350 series accelerators and a rack-scale AI infrastructure platform that it claims will deliver major strides in compute efficiency.
Central to AMD’s competitive strategy is a focus on open standards and an interoperable approach, a clear contrast to NVIDIA, which mixes open source elements with a proprietary approach. “We are entering the next phase of AI, driven by open standards, shared innovation and AMD’s expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI,” said AMD CEO Lisa Su.
As part of that focus on open standards, AMD unveiled the newest version of its open source AI software, ROCm 7, which is designed to support the compute-intensive demands of generative AI and other high-performance enterprise deployments. The company says ROCm 7 takes a “developer-first” approach that makes life easier for coders, including a menu of new development tools and libraries to speed AI development.
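AMD didn’t walk through ROCm 7’s toolchain in detail, but the developer-first pitch is easy to illustrate: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda namespace that CUDA-oriented code already uses, so much existing code runs unmodified. A minimal sketch (our illustration, not part of AMD’s announcement):

```python
# Minimal sketch (illustrative, not from AMD's announcement): a ROCm
# build of PyTorch reports AMD GPUs through the familiar torch.cuda
# namespace, so CUDA-style code runs unmodified on Instinct hardware.
import torch

if torch.cuda.is_available():                   # True on ROCm builds with an AMD GPU
    print(torch.version.hip)                    # HIP version string on ROCm builds (None on CUDA builds)
    x = torch.randn(4096, 4096, device="cuda")  # "cuda" addresses the AMD GPU under ROCm
    y = x @ x                                   # matmul dispatched through ROCm's GPU libraries
    print(y.shape, y.device)
else:
    print("No ROCm- or CUDA-capable GPU detected")
```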
AMD also demoed its open-standards rack-scale AI infrastructure, which is available with the company’s Instinct MI350 Series AI accelerators, data center GPUs set for general availability later this year. Built on the 4th Gen AMD CDNA architecture, the MI350 chips offer 288GB of HBM3E memory with an impressive 8TB/s of bandwidth and are geared for both AI training and inference.
According to AMD, the Instinct MI350 series surpassed the company’s five-year goal of a 30x improvement in energy efficiency, with the chips delivering a 38x gain.
On a related note, AMD announced a 2030 goal of a 20x increase in rack-scale energy efficiency from a 2024 baseline. If the company hits that target, an AI model that today requires 275 racks to train could be trained on a single fully utilized rack by 2030, using 95% less electricity.
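The arithmetic behind those numbers is straightforward; the sketch below (our back-of-the-envelope check; only the 275-rack, 20x and 95% figures come from AMD) works it through:

```python
# Back-of-the-envelope check of AMD's stated figures. The rack counts,
# the 20x goal and the 95% claim come from AMD; the arithmetic is ours.
racks_2024 = 275        # racks a representative model needs to train today
racks_2030 = 1          # racks targeted for the same model in 2030
efficiency_gain = 20    # AMD's 2030 rack-scale energy-efficiency goal (20x)

# A 20x gain in useful work per joule means the same training run needs
# 1/20th of the energy, i.e. a 95% reduction, matching AMD's claim.
energy_fraction = 1 / efficiency_gain
print(f"Energy vs. 2024 baseline: {energy_fraction:.0%}")  # 5%
print(f"Electricity saved: {1 - energy_fraction:.0%}")     # 95%

# Collapsing 275 racks into one also implies roughly a 275x jump in
# per-rack compute density, a gain beyond the efficiency metric alone.
print(f"Implied per-rack density gain: {racks_2024 / racks_2030:.0f}x")
```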
Also central to AMD’s strategy is a product it calls Helios, a fully integrated AI rack platform scheduled for release in 2026 and the company’s most direct competitive move toward NVIDIA. AMD touts Helios as co-designed across silicon and software to deliver an AI rack capable of supporting advanced, full-scale AI training and distributed inference. Helios will scale across 72 GPUs with support from Ultra Accelerator Link (UALink), an open standard that enables low-latency connections between AI accelerators. UALink ties all of the system’s GPUs into one unified system, greatly boosting performance.
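UALink is an interconnect standard rather than a programming interface, so AMD’s announcement doesn’t imply any particular API. As a rough illustration of what “one unified system” means to training code, here is a minimal PyTorch distributed sketch (our assumption of a generic torchrun launch, not AMD’s software stack) in which every GPU in a Helios-class rack joins a single collective operation:

```python
# Illustrative only: UALink is a transport standard, not an API, so this
# uses generic PyTorch collectives to show how software treats a rack of
# GPUs as one compute engine. Assumes a launcher such as torchrun has set
# RANK, LOCAL_RANK, WORLD_SIZE and the rendezvous variables for 72 workers.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # maps to RCCL on AMD/ROCm builds
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device("cuda", local_rank)

    # Each GPU holds a shard of the gradients; one all-reduce combines
    # them across the interconnect as if the rack were a single device.
    grad_shard = torch.ones(1024, device=device)
    dist.all_reduce(grad_shard, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        # With 72 workers, every element now equals the world size.
        print(f"world_size={dist.get_world_size()}, value={grad_shard[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```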
Su, noting that Helios is a rack system that functions as a “single massive compute engine,” also compared it to NVIDIA’s Vera Rubin rack, which is slated for release in the second half of 2026.
Helios will be driven by AMD’s new MI400 series chips, set for release next year; AMD expects them to offer as much as 10x the performance of the previous generation. When Su announced the new chips at the Advancing AI event, she was joined onstage by OpenAI CEO Sam Altman, who voiced great enthusiasm for the upcoming silicon. “When you first started telling me about the specs, I was like, there’s no way, that just sounds totally crazy,” Altman said. “It’s gonna be an amazing thing.”