Nutanix has released a major update to its core infrastructure software while simultaneously extending its technology roadmap with NVIDIA, supporting a strategy that links IT resilience with the growing demands of enterprise AI.
Nutanix Cloud Infrastructure (NCI) 7.5 is aimed at organizations running large, distributed environments that must balance availability, security and operational control. In parallel, Nutanix is committing its integrated AI platform to upcoming NVIDIA technologies, including the Rubin GPU platform and the Vera CPU architecture, as customers move from AI experimentation toward sustained production use.
More Than a Routine Refresh
The company touts the NCI 7.5 release as more than a routine refresh. The updated platform introduces higher capacity ceilings, expanded disaster recovery options and tighter automation designed to reduce manual intervention during failures or upgrades.
Among the more notable changes is support for up to 185TB of all-flash capacity per node, aimed at enterprises consolidating heavy workloads. Nutanix has also added finer-grained control over virtual machine restart order in its AHV hypervisor, allowing administrators to define dependency chains so that multi-tier applications recover in the correct sequence after an outage or reboot.
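Conceptually, the feature amounts to restarting virtual machines in dependency order. The short Python sketch below illustrates the idea with a topological sort over a hypothetical dependency map; the VM names and the restart function are illustrative assumptions, not Nutanix's actual API.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map for a three-tier application:
# each VM lists the VMs that must be running before it starts.
dependencies = {
    "db-vm": set(),          # database tier has no prerequisites
    "app-vm": {"db-vm"},     # application tier waits for the database
    "web-vm": {"app-vm"},    # web tier waits for the application tier
}

def restart_in_order(deps):
    """Yield VMs in an order that respects their dependency chain."""
    for vm in TopologicalSorter(deps).static_order():
        # In a real platform this step would call the hypervisor's restart
        # API and wait for a health check before moving to the next VM.
        print(f"restarting {vm}")
        yield vm

if __name__ == "__main__":
    list(restart_in_order(dependencies))  # db-vm, then app-vm, then web-vm
```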
Disaster recovery has been broadened with support for multiple replication targets from a single source, enabling organizations to mix synchronous, near-synchronous and asynchronous replication strategies. A new staging option allows IT teams to pause reverse replication after a failover, giving them time to validate data before resuming normal operations.
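In practice, a protection policy of this kind pairs a single source with several targets, each carrying its own replication mode and recovery-point objective. The sketch below is a purely illustrative Python model of such a policy; the class names, fields and site labels are hypothetical and do not reflect Nutanix's tooling.

```python
from dataclasses import dataclass
from enum import Enum

class ReplicationMode(Enum):
    SYNCHRONOUS = "synchronous"        # zero data loss, metro-distance sites
    NEAR_SYNCHRONOUS = "near-sync"     # seconds of lag
    ASYNCHRONOUS = "asynchronous"      # minutes to hours of lag

@dataclass
class ReplicationTarget:
    site: str
    mode: ReplicationMode
    rpo_seconds: int                   # recovery point objective

@dataclass
class ProtectionPolicy:
    source_site: str
    targets: list[ReplicationTarget]
    pause_reverse_replication_on_failover: bool = False

# One source replicating to three targets with mixed strategies.
policy = ProtectionPolicy(
    source_site="primary-dc",
    targets=[
        ReplicationTarget("metro-dc", ReplicationMode.SYNCHRONOUS, rpo_seconds=0),
        ReplicationTarget("regional-dc", ReplicationMode.NEAR_SYNCHRONOUS, rpo_seconds=60),
        ReplicationTarget("cloud-dr", ReplicationMode.ASYNCHRONOUS, rpo_seconds=3600),
    ],
    # Mirrors the new staging option: hold reverse replication after a
    # failover until the data has been validated.
    pause_reverse_replication_on_failover=True,
)
```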
Security-related changes in NCI 7.5 focus on tightening controls that are often critical in regulated environments. These include support for authenticated network time services to prevent time spoofing, and expanded options for centralized key management using third-party key management servers for virtual TPMs (Trusted Platform Modules).
Operationally, the release reflects a continued emphasis on managing infrastructure at scale. New tooling simplifies upgrades in air-gapped environments, while deeper integration with Cisco networking allows network segmentation to be configured during initial cluster deployment.
A significant addition is support for external storage configurations, allowing Nutanix Cloud Infrastructure to run on qualified server hardware while connecting directly to Pure Storage FlashArray systems using NVMe over TCP. The option is designed for customers looking to modernize software layers without discarding existing storage investments.
Sovereign Cloud Push
Alongside NCI 7.5, Nutanix has highlighted broader enhancements to its Nutanix Cloud Platform aimed at distributed sovereign cloud use cases. These environments are increasingly relevant for organizations that need cloud-like flexibility while maintaining strict control over data residency, governance and operational boundaries.
Recent updates allow key management and governance components to run in customer-controlled, on-premises environments rather than as external services. Nutanix has also expanded the regional availability of its Nutanix Cloud Clusters offerings across public cloud providers, including additional regions in the US and Europe.
Keeping Current with NVIDIA
Running alongside the infrastructure announcements is Nutanix’s deepening partnership with NVIDIA, which centers on simplifying how enterprises deploy and operate AI infrastructure.
Nutanix’s integrated AI operating environment is built on its Acropolis Operating System and AHV hypervisor, combined with the Nutanix Kubernetes Platform and Nutanix Enterprise AI. This stack is designed to work closely with NVIDIA AI Enterprise software and NVIDIA NIM microservices, with the goal of reducing the complexity of assembling AI-ready infrastructure.
The latest announcements extend that alignment to NVIDIA's next-generation platforms. Nutanix plans to support NVIDIA Rubin GPUs and Vera Arm-based CPUs across both bare-metal and virtualized environments. The roadmap also includes support for NVIDIA's BlueField-4-enabled inference storage platform and Spectrum-X Ethernet Photonics switches, a notable inclusion given the growing role of networking in large-scale AI deployments.
The emphasis, from both companies, is on treating AI infrastructure as a cohesive operating environment rather than a collection of loosely integrated components. As model sizes grow and AI workloads become more central to business operations, companies are increasingly evaluating how quickly they can provision environments, maintain governance and control costs over time.

