Hewlett Packard Enterprise used its Discover Barcelona conference this week to answer a lingering question about its $14 billion Juniper Networks acquisition: how quickly can it turn overlapping assets into a coherent, AI-focused networking strategy?

The answer, at least in this first phase, is: faster than many expected. HPE Networking chief Rami Rahim outlined a dual-track roadmap that blends Juniper’s Mist cloud platform with Aruba Central, couples them to new Ethernet hardware for GPU fabrics, and ties everything back into HPE’s GreenLake and OpsRamp AIOps stack.

Dual Platform Design

At the control-plane level, HPE is promising what it calls a dual platform design. A forthcoming Wi-Fi 7 access point line will be able to register with either Mist or Aruba Central, and customers will be able to switch management platforms without swapping hardware. That message of investment protection is critical for enterprises that have already standardized on one side or the other of the now-combined portfolio.

Behind that hardware flexibility is a two-way exchange of AIOps features. Juniper's Mist Large Experience Model (LEM), an engine trained on billions of telemetry points from applications such as Zoom and Teams, is being brought into Aruba Central, along with Mist's well-known incident automation.

In the opposite direction, Aruba’s AI-based client profiling and organizational insights are being wired into Mist. HPE is also extending Aruba’s agentic mesh reasoning engine to Mist, with the goal of improving anomaly detection and automated root-cause analysis across both platforms.

What makes this ambitious is that Mist and Central were built with different histories and deployment models. Mist is a cloud-native SaaS design, while Aruba Central has evolved to support cloud, on-premises, and VPC environments and recently underwent a substantial microservices rebuild. HPE executives argue that the underlying architectures are now close enough that features can be “built once and deployed twice,” and the early integrations suggest the engineering teams have at least cleared the first hurdles.

AI Factory

These products fit into a broader AI factory narrative HPE is building with partners NVIDIA and AMD. HPE is folding Juniper routing into long-haul data center interconnects and edge on-ramps for NVIDIA-powered AI fabrics, while also collaborating with AMD on the Helios rack-scale architecture. That Helios design leans on standards-based Ethernet (not InfiniBand) for scale-up GPU connectivity inside the rack, a clear signal that HPE expects Ethernet to erode InfiniBand's historic advantage in AI clusters over time.

The control and management story is equally important. With its latest OpsRamp and GreenLake Intelligence updates, HPE is trying to give IT operations a single view from compute through networking to public cloud. Telemetry from Compute Ops Management, Aruba Central, and Juniper Apstra is being fed into a shared data model, with OpsRamp becoming the hybrid command center console.

New agentic AIOps features, including Model Context Protocol support, are meant to let third-party AI agents plug in without custom integration work, and to give those agents fuller context about rapidly changing environments.

Head to Head with Cisco?

Whether all of this is enough to unseat Cisco from its long-held networking lead is an open question. It won’t be easy. Cisco is busy converging its own Meraki and Catalyst worlds and pushing its version of AI-assisted networking.

But Barcelona made two things obvious. First, HPE has moved quickly to present a unified Juniper–Aruba story rather than a prolonged coexistence of parallel stacks. Second, it is anchoring that story firmly in AI, both AI for the network via AIOps and networking for AI via Ethernet fabrics tuned for GPU clusters.

In a market where AI workloads are reshaping everything from switch silicon to management consoles, that tight linkage between AI operations and AI infrastructure may prove to be HPE’s most strategic shift.