
Hitachi Vantara has added a higher-end edition to its Virtual Storage Platform One (VSP One) flash memory platform, designed to meet the demands of high-performance transaction processing and analytics applications.
Jay Subramanian, general manager for core storage platforms at Hitachi Vantara, said VSP One Block High End is an NVMe-based storage system that also supports file- and object-based storage. This edition of VSP One provides access to 346TB per rack unit (RU), with 50 million IOPS enabled via hardware compression acceleration. It can scale up to 12 controllers spanning 288 solid-state drives (SSDs), each providing access to up to 60TB of storage.
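As a rough sanity check on those figures, the back-of-the-envelope sketch below multiplies the stated maximums together. It assumes all 288 drives are populated at the full 60TB each and does not account for RAID, sparing or overprovisioning overhead; the 346TB-per-RU figure may also reflect effective (post-compression) rather than raw capacity, which this arithmetic does not model.

```python
# Back-of-the-envelope math on the published VSP One Block High End figures.
# Assumptions (not from Hitachi documentation): every drive is counted at its
# full 60TB capacity and no RAID/sparing/overprovisioning overhead is subtracted.

MAX_SSDS = 288      # maximum drives across up to 12 controllers
TB_PER_SSD = 60     # stated maximum capacity per SSD
TB_PER_RU = 346     # stated capacity per rack unit

raw_tb = MAX_SSDS * TB_PER_SSD          # 17,280 TB, roughly 17.3 PB raw
rack_units = raw_tb / TB_PER_RU         # ~50 RU at the stated density

print(f"Raw capacity at full scale: {raw_tb:,} TB (~{raw_tb / 1000:.1f} PB)")
print(f"Rack units at {TB_PER_RU} TB/RU: ~{rack_units:.0f} RU")
```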
Hitachi Vantara claims the platform also provides eight nines of availability, while a Hitachi cyber resilience guarantee ensures near-zero data loss and rapid recovery.
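For context on what "eight nines" means in practice, the short sketch below converts an availability percentage into maximum expected downtime per year. The conversion is standard arithmetic, not a Hitachi formula; the eight-nines figure itself is the vendor's claim.

```python
# Convert a number of "nines" of availability into allowed downtime per year.
# Standard arithmetic for context only; the eight-nines figure is Hitachi's claim.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(nines: int) -> float:
    """Maximum seconds of downtime per year for a given number of nines."""
    unavailability = 10 ** (-nines)
    return unavailability * SECONDS_PER_YEAR

for n in (5, 8):
    print(f"{n} nines: ~{downtime_per_year(n):.2f} seconds of downtime per year")
# Five nines allows roughly 5 minutes a year; eight nines, about a third of a second.
```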
In general, there is a greater need to converge storage platforms as application workloads continue to evolve, said Subramanian. Beyond reducing the total cost of storage, convergence matters because more organizations are running artificial intelligence (AI) workloads that access unstructured and semi-structured data alongside data stored in relational databases. The VSP One platforms are designed to deliver access to that data via an all-flash platform with the sub-millisecond performance required, in a way that can be centrally managed, added Subramanian.
That performance requirement will become especially acute as more organizations deploy latency-sensitive AI agents that will need to access massive amounts of data in on-premises IT environments, he added.
Additionally, many of those organizations will be looking for storage platforms that consume energy as efficiently as possible as AI agents running on graphics processing units (GPUs) start to proliferate across enterprise IT environments, noted Subramanian.
It’s not clear to what degree organizations will revisit their data and storage management strategies in the months ahead, but a recent Futurum Group report noted there has already been a 14.5% increase in data infrastructure sales in 2025.
The challenge, as always, is that adding storage quickly becomes expensive. While the cost per terabyte of storage has dropped over the years, the sheer volume of data being generated results in storage consuming a higher percentage of the overall IT budget at the expense of other priorities.
Ultimately, IT teams will need to pay closer attention to the level of I/O performance required as various classes of applications access data. Of late, there has been more interest in running AI workloads in on-premises IT environments to ensure organizations retain control of their data, but most organizations still need to make significant investments in IT infrastructure to accommodate those workloads.
At the same time, IT teams will need to be able to attract and retain data engineers who have the expertise required to ensure that the right data shows up in the right place at the right time.
In the meantime, the overall amount of data being generated is only going to increase and, regardless of where that data is stored, so are the challenges associated with optimally managing it all.


