Covering Scientific & Technical AI | Monday, October 7, 2024

Red Hat OpenShift AI Announces Integration Support for NVIDIA NIM Microservices 

Red Hat and NVIDIA today announced integration support for NVIDIA NIM microservices on Red Hat OpenShift AI. OpenShift AI is a flexible, scalable MLOps Kubernetes platform with tools to build, deploy, and manage AI-enabled applications. NVIDIA NIM microservices, in turn, are a set of microservices designed to accelerate the deployment of generative AI applications on a trusted open hybrid cloud platform.

Put together, NVIDIA NIMs on the Red Hat OpenShift AI platform will enable optimized inferencing for dozens of AI models. Organizations will be able to increase productivity with generative AI capabilities such as expanding customer service with virtual assistants, summarizing IT support cases, and accelerating business operations with domain-specific copilots.

“In this collaboration with NVIDIA, Red Hat is hyper-focused on breaking down the barriers and complexities associated with rapidly building, managing and deploying gen AI-enabled applications,” said Chris Wright, Chief Technology Officer and Senior Vice President of Global Engineering at Red Hat. “Red Hat OpenShift AI provides a scalable, flexible foundation to extend the reach of NIM microservices, empowering developers with pre-built containers and industry-standard APIs, all powered by open source innovation.”

This integration will give organizations considerable flexibility and versatility in bringing AI tools into their business. NVIDIA NIM microservices are designed to increase the efficiency of generative AI deployments by supporting a wide range of AI models, including open-source community models, NVIDIA AI Foundation models, and custom models.

“The beauty of what we're doing is that we are going to be offering Nvidia NIM from within OpenShift AI so customers will have the ability from within OpenShift AI to easily deploy,” said Steven Huels, Red Hat Vice President and General Manager of the AI Business Unit, during a press pre-briefing. “So, click a couple of buttons, and you can select from any of the Nvidia NIMs, deploy them into your OpenShift AI footprint, and from that interface scale and manage them alongside any other intelligent applications or models you're running. The beauty of that is you get the power of all of the performance enhancements that Nvidia has put into those NIMs.”
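Once a NIM is deployed this way, it exposes an industry-standard, OpenAI-compatible API, so applications can query it like any OpenAI-style endpoint. The following is a minimal sketch of building such a request; the endpoint URL and model name are hypothetical placeholders, not details from the announcement.

```python
import json

# Hypothetical in-cluster route for a NIM microservice deployed on
# OpenShift AI; the hostname and model ID below are illustrative only.
NIM_ENDPOINT = "http://llama3-nim.example.svc:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "meta/llama3-8b-instruct") -> dict:
    """Build an OpenAI-compatible chat completion payload, the API style NIM exposes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

payload = build_chat_request("Summarize this IT support ticket: ...")
print(json.dumps(payload, indent=2))

# To actually call a deployed NIM (requires network access to the cluster):
# import urllib.request
# req = urllib.request.Request(
#     NIM_ENDPOINT,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the interface is OpenAI-compatible, existing client code can often be pointed at a NIM deployment by changing only the base URL.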

The news comes at a busy time for Red Hat, which is currently hosting the Red Hat Summit in Denver, Colorado, May 6-9, 2024. A premier event for the open source IT community, the Red Hat Summit will help attendees learn, collaborate, and innovate through hands-on labs and informative sessions.

The integration between OpenShift AI and NVIDIA NIM provides a range of potential benefits for organizations. To begin, OpenShift AI now offers a streamlined path to deploying NVIDIA NIM in a common workflow alongside other AI deployments. This enables greater consistency between deployments as well as easier overall management.

This partnership enables integrated scaling and monitoring for NVIDIA NIM deployments in coordination with other AI model deployments across hybrid cloud environments. Additionally, NVIDIA and Red Hat’s collaboration will allow for enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for organizations that rely on AI.

Huels expressed his excitement for this integration during the press pre-briefing by elaborating on how many great ideas he’s seen shelved over the years due to production complexities.

“I have been doing this for 24 years, and in the early part of my career I saw a lot of great ideas never get put into production because of the complexities that were involved in AI,” said Huels. “I think the best thing that happened with the generative AI explosion was this expectation that the consumption was simple enough that now it can be a wide technology.”

AIwire