
At its Insight 2025 conference in Las Vegas, NetApp presented a strategy squarely aimed at the AI era: make enterprise data AI-ready, then keep it fast, governed, and recoverable. The company unveiled AFX, a disaggregated all-flash platform built expressly for AI pipelines, and a companion AI Data Engine that sits alongside storage to curate, guard, and synchronize data for generative and retrieval-augmented applications.
A same-day announcement with Cisco adds the networking muscle—400G Nexus switches—for the east-west traffic these workloads demand, with FlexPod AI integration on the roadmap.
Independent Scaling
AFX is NetApp’s first large-scale separation of performance and capacity within its ONTAP data management software platform. Controllers and NVMe flash shelves scale independently over 400G Ethernet, forming a single pool that can be dialed toward throughput-hungry training runs or broader, lower-intensity inference estates.
NetApp positions this as the foundation for AI factories (a term used quite a bit these days) with certification for NVIDIA DGX SuperPOD and support for RTX Pro servers using Blackwell Server Edition GPUs. For customers, the message is simpler: add controllers when you need IOPS, or add enclosures when you need terabytes.
A notable addition is the DX50 data compute node, which introduces GPU-accelerated processing directly into NetApp’s storage fabric. Instead of hosting general virtual machines, the DX50 is designed to offload data-adjacent tasks such as metadata analysis, vectorization, and policy enforcement. The result is a reduction in redundant data copies and faster synchronization between source updates and the AI systems that rely on that data.
AI Data Engine, Security, and an Alliance
The AI Data Engine bundles four key features that map to common pain points in enterprise AI. It offers a data curator that ingests data in real time and manages vector stores natively on the platform. The Engine’s data guardrails enforce policy at the storage layer, marking sensitive content and automating redaction or exclusion. The data sync feature automates change detection and replication so models and RAG indexes stay current. And the metadata engine builds a live catalog across silos so teams can find and authorize the right datasets without manual wrangling.
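The data sync idea can be sketched generically: fingerprint each source document, re-embed only the ones whose content changed since the last pass, and purge entries for documents deleted at the source. The sketch below is illustrative only and not NetApp's API; `ToyVectorIndex`, `sync_index`, and the hash-based change detection are all assumptions standing in for a real vector store and embedding pipeline.

```python
import hashlib


class ToyVectorIndex:
    """Stand-in for a vector store: maps doc IDs to 'embeddings' (here, token sets)."""

    def __init__(self):
        self.entries = {}

    def upsert(self, doc_id, text):
        self.entries[doc_id] = set(text.lower().split())

    def remove(self, doc_id):
        self.entries.pop(doc_id, None)


def sync_index(index, docs, fingerprints):
    """Refresh the index incrementally.

    docs: {doc_id: text} as currently found at the source.
    fingerprints: {doc_id: sha256 hex digest} recorded by the previous sync.
    Returns the updated fingerprint map and the set of doc IDs re-embedded.
    """
    refreshed = set()
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if fingerprints.get(doc_id) != digest:
            index.upsert(doc_id, text)   # only new or changed docs are re-embedded
            fingerprints[doc_id] = digest
            refreshed.add(doc_id)
    for doc_id in list(fingerprints):    # drop docs deleted at the source
        if doc_id not in docs:
            index.remove(doc_id)
            del fingerprints[doc_id]
    return fingerprints, refreshed
```

Run twice against the same corpus and only the documents that changed in between are touched, which is the property that keeps a RAG index current without full re-ingestion.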
On the security front, NetApp rebranded and expanded its ransomware suite as Ransomware Resilience, adding breach detection that looks for anomalous reads (rapid directory traversals, unusual user patterns) and pushes real-time alerts into a customer’s SIEM. Analysts note that performing both ransomware and exfiltration analytics in the storage layer remains rare, positioning NetApp as a defender as AI expands the attack surface.
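A rapid-directory-traversal check of the kind described above can be illustrated with a sliding window over read events: flag any user who touches an unusually large number of distinct directories within a few seconds. This is a generic sketch, not NetApp's detector; the class name, thresholds, and event shape are all assumptions.

```python
from collections import defaultdict, deque


class TraversalDetector:
    """Flags a user whose read stream touches too many distinct directories
    within a short time window. Thresholds here are illustrative defaults,
    not values from any shipping product."""

    def __init__(self, window_seconds=10, max_distinct_dirs=50):
        self.window = window_seconds
        self.limit = max_distinct_dirs
        self.events = defaultdict(deque)   # user -> deque of (timestamp, directory)

    def observe(self, user, timestamp, directory):
        """Record one read event; return True if the user now looks anomalous."""
        q = self.events[user]
        q.append((timestamp, directory))
        while q and timestamp - q[0][0] > self.window:
            q.popleft()                    # expire reads outside the window
        distinct = {d for _, d in q}
        return len(distinct) > self.limit  # True would trigger a SIEM alert
```

In practice a storage-layer detector would combine several such signals (read rate, entropy of written data, off-hours access), but the windowed distinct-count captures the "rapid directory traversal" pattern in miniature.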
On the partnership front, the Cisco alliance formalizes the network layer that enterprises have been stitching together on their own. By integrating Nexus 400G switching into AFX clusters, the vendors offer a full-stack pitch: storage tuned for AI, lossless low-latency fabrics, and centralized management through Cisco Intersight.
Efficient Support of AI
Taken together, AFX and the AI Data Engine are far more than a product refresh. They’re an ambitious bid to pull data preparation, policy, and search into the storage plane so AI teams spend less time plumbing and more time shipping features, even as security admins retain the controls they need.
For enterprises that have been slowed by complex systems, NetApp’s strategy appears direct: make storage the place where AI data is organized, guarded, and kept fresh—and wire it to the GPU floor with 400G network switches.