
Dell Technologies today added a slew of capabilities that promise to make it simpler and more efficient to run artificial intelligence (AI) workloads in on-premises IT environments.

At the Dell Technologies World conference, Dell also revealed it is adding support for AMD Instinct MI350 series graphics processing units (GPUs) to its Dell PowerEdge XE9785 and XE9785L servers, in addition to supporting both the latest Blackwell series of GPUs and a forthcoming Vera CPU series from NVIDIA, along with Intel Gaudi 3 AI accelerators.

To reduce the amount of energy required to run these and other classes of data-intensive applications, Dell also unveiled the Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx), which captures 100% of the heat generated by a server within a self-contained airflow system.

Varun Chhabra, senior vice president for Infrastructure Solutions Group (ISG) and telecom marketing at Dell, said that capability should reduce cooling costs by as much as 60% when deployed using Dell IR7000 series racks.

Additionally, the IR7000 racks will support an air cooling capacity of up to 80 kW per rack while allowing IT teams to use warmer water to cool racks, which eliminates the need for more expensive chillers. Collectively, those capabilities will enable IT teams to deploy 16% more racks without increasing power consumption.

Dell is also extending its reach into the realms of storage and data lakehouses. An initiative dubbed Dell Project Lightning promises to create a parallel file system that provides up to two times greater throughput than competing approaches. The existing Dell Data Lakehouse, meanwhile, has been enhanced to make it simpler to create and query datasets.

Finally, Dell extended its AI ecosystem to include support for platforms from Cohere North, Google, Mistral AI, Meta and Glean.

In general, Dell is making a case for running AI workloads at a lower total cost in an on-premises IT environment. Rather than running those workloads in a cloud computing environment, where every input and output token generated adds to the bill, an on-premises IT environment provides a pool of infrastructure resources that an IT team can allocate more efficiently, said Chhabra. Dell claims the Dell AI Factory approach can be up to 62% more cost-effective for running the inference engines that drive large language models (LLMs) in a production environment.
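To make that tradeoff concrete, the sketch below shows the kind of back-of-the-envelope break-even math an IT team might run. It is a minimal illustration only; every price and cost figure in it is a hypothetical placeholder rather than a number from Dell or any cloud provider.

```python
# Hypothetical break-even sketch: per-token cloud pricing vs. amortized
# on-premises infrastructure. All figures are illustrative placeholders,
# not real vendor rates or Dell cost data.

# Assumed cloud pricing (USD per 1M tokens).
CLOUD_PRICE_PER_M_INPUT = 3.00
CLOUD_PRICE_PER_M_OUTPUT = 15.00

# Assumed on-prem costs: hardware amortized over 3 years plus fixed opex.
ONPREM_CAPEX = 250_000.0          # hypothetical server/rack purchase
ONPREM_OPEX_PER_MONTH = 4_000.0   # hypothetical power, cooling, support
AMORTIZATION_MONTHS = 36

def cloud_cost_per_month(input_tokens_m: float, output_tokens_m: float) -> float:
    """Cloud cost scales linearly with tokens processed."""
    return (input_tokens_m * CLOUD_PRICE_PER_M_INPUT
            + output_tokens_m * CLOUD_PRICE_PER_M_OUTPUT)

def onprem_cost_per_month() -> float:
    """On-prem cost is flat: amortized capex plus fixed monthly opex."""
    return ONPREM_CAPEX / AMORTIZATION_MONTHS + ONPREM_OPEX_PER_MONTH

if __name__ == "__main__":
    # Assume output volume runs at 20% of input volume, purely for illustration.
    for tokens_m in (100, 500, 1_000, 5_000):  # millions of input tokens/month
        cloud = cloud_cost_per_month(tokens_m, tokens_m * 0.2)
        onprem = onprem_cost_per_month()
        cheaper = "on-prem" if onprem < cloud else "cloud"
        print(f"{tokens_m:>6}M tokens/mo  cloud=${cloud:>10,.2f}  "
              f"on-prem=${onprem:>10,.2f}  -> {cheaper}")
```

Under these made-up assumptions, the flat on-premises cost overtakes per-token cloud pricing somewhere between the lower and upper token volumes; the actual crossover point depends entirely on real prices and sustained utilization.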

It's not clear which teams within organizations are managing IT infrastructure for AI applications. In some instances, a data science team includes a set of specialists for managing AI infrastructure. In other instances, the same IT teams that have historically managed IT infrastructure are also assuming responsibility for these classes of servers. Regardless of approach, Dell claims there are now more than 3,000 organizations that have adopted Dell AI Factory.

As AI continues to become more widely adopted, it's clear IT teams need to become savvier about where AI inference engines are deployed, said Chhabra. Not every AI model, for example, requires the most advanced GPU available; some run more cost-effectively on a less expensive GPU or CPU, he added. The AMD Instinct MI350 series GPUs, configured with 288 GB of HBM3E memory per GPU, can, for example, provide up to 35 times greater inferencing performance than the previous generation of AMD GPUs.

As part of providing that option, Dell now also offers access to up to 200G storage networking and an upgraded instance of the AMD ROCm software stack for building and deploying AI applications.

Similarly, Dell now provides access to the NVIDIA AI Enterprise software suite on its Dell AI Factory platforms.

Ultimately, each IT organization will determine how and where to run AI workloads that are already starting to proliferate across the enterprise. The one certain thing is that building custom AI infrastructure to run those applications is not likely to be the first choice for many of them.