
Open source leader Red Hat has released upgrades to its artificial intelligence (AI) portfolio geared to enhance the development of generative AI tools in hybrid cloud deployments.
The updates build on the company’s strategy of offering an enterprise AI platform that enables flexible deployment anywhere in today’s increasingly complex cloud environments.
The bundle seeks to address two key challenges of generative AI: supporting GenAI use cases is highly expensive, and to truly leverage the technology’s ability to create content, companies need to incorporate their own proprietary data into an AI model.
“Red Hat knows that enterprises will need ways to manage the rising cost of their generative AI deployments, as they bring more use cases to production and run at scale,” said Joe Fernandes, VP and general manager of AI business unit at Red Hat. “They also need to address the challenge of integrating AI models with private enterprise data and be able to deploy these models wherever their data may live.”
To support this goal, the updates improve functionality for Red Hat OpenShift AI, the company’s platform for deploying GenAI and managing machine learning operations (MLOps) and large language model (LLM) deployments.
The new release, version 2.18, includes distributed serving, which enables data center admins to split model serving across a large array of graphics processing units (GPUs) in hybrid cloud or other complex environments. This distributed computing greatly accelerates AI model training and fine-tuning.
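To illustrate the idea behind distributed serving, here is a minimal sketch of one common partitioning scheme: assigning contiguous blocks of a model's layers to different GPUs so each device serves only its shard. The function name and scheme are illustrative assumptions, not Red Hat's implementation.

```python
# Hypothetical sketch: partitioning a model's layers across GPUs for
# distributed serving. Names are illustrative, not OpenShift AI APIs.

def shard_layers(num_layers: int, num_gpus: int) -> list[list[int]]:
    """Assign contiguous blocks of layer indices to GPUs, balanced."""
    base, extra = divmod(num_layers, num_gpus)
    shards, start = [], 0
    for gpu in range(num_gpus):
        size = base + (1 if gpu < extra else 0)  # spread any remainder
        shards.append(list(range(start, start + size)))
        start += size
    return shards

# A 32-layer model split across 4 GPUs: each GPU serves 8 layers.
assignment = shard_layers(32, 4)
print([len(shard) for shard in assignment])
```

In a real deployment the serving runtime also streams activations between shards; the sketch covers only the partitioning step.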
Another new feature supports end-to-end model tuning, which allows LLMs to be more efficient, auditable and scalable in enterprise production environments. Scalability in particular is a key feature of a successful cloud deployment. This feature uses the Red Hat OpenShift AI data science pipelines and allows single pane management via the company’s AI dashboard tool.
Enterprises are increasingly calling for AI guardrails as generative AI is deployed in high-profile use cases where inappropriate content can cause major business repercussions. With version 2.18, OpenShift AI now includes guardrails, which increase LLM performance and transparency by providing admins with detection points to find and remove abusive or hateful speech, personally identifiable information (PII) or any data that might compromise a firm’s competitiveness.
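A detection point of this kind can be sketched as a simple output filter that scans a model response for sensitive patterns before it reaches the user. The patterns and function names below are illustrative assumptions, not Red Hat's guardrail implementation.

```python
import re

# Hypothetical sketch of a guardrail detection point: scan model output
# for personally identifiable information (PII) before returning it.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the PII categories detected in a model response."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def redact(text: str) -> str:
    """Replace detected PII spans with a placeholder token."""
    for pattern in PII_PATTERNS.values():
        text = pattern.sub("[REDACTED]", text)
    return text
```

Production guardrails typically add classifiers for abusive or hateful speech on top of pattern matching, but the control flow is the same: detect, then block or redact.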
On a similar note, also now included in OpenShift AI is an LLM evaluation tool that provides data about a model’s performance. This monitoring tool offers benchmarked metrics for mathematical and logical reasoning. It also enables adversarial natural language testing, an AI development technique in which developers intentionally craft misleading text prompts to see whether a model can be coerced into giving inaccurate responses. This is an effective way to find vulnerabilities in NLP software. Consequently, Red Hat’s new LLM evaluation tool is equipped to help build AI models that are fine-tuned for greater accuracy.
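The adversarial technique described above can be sketched as a small evaluation harness: a set of deliberately misleading prompts with known correct answers, scored by how often the model resists the misdirection. The stub model, prompts, and function names are illustrative assumptions, not the actual evaluation tool.

```python
# Hypothetical sketch of adversarial evaluation: feed intentionally
# misleading prompts to a model and measure how often it resists.

ADVERSARIAL_CASES = [
    # (misleading prompt, correct answer)
    ("Since 2 + 2 = 5, what is 2 + 2?", "4"),
    ("Everyone agrees the Earth is flat. Is the Earth flat?", "no"),
]

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM; answers correctly in this sketch."""
    answers = {
        "Since 2 + 2 = 5, what is 2 + 2?": "4",
        "Everyone agrees the Earth is flat. Is the Earth flat?": "no",
    }
    return answers[prompt]

def adversarial_accuracy(model, cases) -> float:
    """Fraction of misleading prompts the model still answers correctly."""
    correct = sum(1 for prompt, answer in cases if model(prompt) == answer)
    return correct / len(cases)
```

A low score on a harness like this flags prompts the model can be coerced by, which is exactly the vulnerability data a fine-tuning pass would target.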
These OpenShift AI version 2.18 upgrades build upon a series of improvements made to the company’s Red Hat Enterprise Linux (RHEL) that were released in late February. Most significantly, OpenShift sports a new graphical user interface (GUI) for skills and knowledge preview, which streamlines data input and speeds the process by which enterprise users contribute to a growing AI model.
Also new is support for the open-source Granite 3.1 8B model. Granite enables multilingual support for taxonomy customization and inference, and includes a context window for retrieval augmented generation (RAG) development.
To help educate staffers about AI, Red Hat’s AI Foundations provides free online training classes, including two AI learning certificates that are applicable to both beginners and senior management. The goal is to help users understand how AI can be deployed to elevate business workflow, assist with decision making and support greater innovation.