Red Hat Summit 2024: The Biggest News In AI, Containers And More

Updates on Lightspeed, OpenShift AI and Red Hat’s partnership with Nvidia were among the major announcements made this week at Summit.

Lightspeed expansion. Red Hat OpenShift AI enhancements. And deeper partnerships with the likes of Stability AI, Oracle and Nvidia.

These are some of the biggest news items from the Raleigh, N.C.-based open source enterprise tools vendor’s Summit 2024 event held Monday through Thursday in Denver.

Red Hat, a subsidiary of IBM and soon-to-be sister company of HashiCorp, focused much of Summit on its role in the growing use of generative artificial intelligence (GenAI) and other AI tools to save on costs and improve operations.

[RELATED: Red Hat Eyes Ways To ‘100 Percent Partner-Led Drive’ In Automation, App Modernization]

Red Hat Summit 2024

As part of Red Hat’s moves to meet the AI moment, the vendor has positioned its portfolio as helping customers move AI models from experimentation into production at a lower hardware cost, meet data privacy concerns, remove barriers between existing systems and storage platforms, and achieve choice in deployment across cloud, data center and edge locations, among other benefits.

About 80 percent of overall Red Hat sales come through indirect channel and alliance relationships, according to CRN’s 2024 Channel Chiefs.

Chris Weis, director of modern data center solutions at World Wide Technology – a Maryland Heights, Mo.-based Red Hat partner and No. 9 on CRN’s 2023 Solution Provider 500 – told CRN in an interview that WWT has seen growing interest from customers in AI workloads on top of OpenShift.

For customers who understand the infrastructure and AI use cases, Red Hat and Kubernetes are an opportunity to bring those workloads to production in the future, he said.

“AI is very new to a lot of customers, but we think over the next six to 12 months, that's going to become a reality for them,” Weis said. “They're going to need to basically understand that (middle) layer much better than what they do today. So Red Hat is doing a great job there.”

WWT also continues to invest in growing its expertise in Red Hat’s Ansible and automation offerings as well as OpenShift virtualization, Weis said.

WWT’s Red Hat practice has also seen more work from customers wanting to diversify their portfolio after Broadcom’s acquisition of VMware.

Some of these customers “probably will still continue to run VMware in the future,” Weis said, but “they are trying to look at alternative platforms, though – in particular, OpenShift virtualization – as a way to augment some specific workloads.”

Read on for some of the other major announcements from Red Hat Summit 2024.

Red Hat Lightspeed Expansion

During Summit, Red Hat announced plans to expand its Lightspeed AI offering across its platforms, including OpenShift and Red Hat Enterprise Linux (RHEL), giving the platforms natural language processing capabilities and potentially making them easier to use for newcomers and experts alike.

Red Hat OpenShift Lightspeed availability is expected in late 2024. RHEL Lightspeed is still “in the planning phase,” according to Red Hat. Ansible Lightspeed is generally available (GA).

Red Hat has positioned Lightspeed as a tool users can leverage at a time when hiring enough IT experts is a challenge, with the vendor calling Lightspeed “a force multiplier.”

OpenShift users can leverage Lightspeed for suggestions on how to use autoscaling, for example. RHEL users can leverage Lightspeed to answer questions and schedule patching for future production maintenance windows through simple commands, according to Red Hat.
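For context on what such autoscaling guidance covers: OpenShift workload autoscaling builds on Kubernetes’ Horizontal Pod Autoscaler, which scales by a documented proportional rule. A minimal sketch of that rule in Python – the formula comes from the Kubernetes documentation, not from Red Hat’s Lightspeed announcement:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule: scale in proportion to the ratio
    of the observed metric to its target, rounding up."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Example: 4 pods averaging 90% CPU against a 60% target -> 6 pods.
print(desired_replicas(4, 90, 60))  # -> 6
```

Lightspeed’s pitch is that users can get this kind of guidance in natural language rather than working out the mechanics themselves.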

OpenShift AI Enhancements

Red Hat’s OpenShift AI version 2.9 brings a host of new features, including a technology preview for model serving at the edge, improved model development and model monitoring visualizations.

The offering’s extension of AI model deployment to edge and remote locations with single-node OpenShift allows for inferencing in environments with constrained resources and intermittent – or even air-gapped – network access, according to Red Hat. This feature allows for out-of-the-box observability and aims to provide a consistent operational experience across core, cloud and edge.

OpenShift AI users gained the ability to use multiple model servers for predictive and GenAI on one platform. Red Hat described this as simplifying operations and lowering costs. This feature supports KServe, vLLM, Caikit-nlp-tgis runtime and more. Red Hat promises out-of-the-box large language model (LLM) serving with this feature.

Red Hat OpenShift AI now has additional workbench images for toolkit flexibility and new accelerator profiles for admins to configure various hardware for model development and serving workflows.

And OpenShift AI has been updated to allow for distributed workloads with the Ray framework. Users can employ CodeFlare and KubeRay for improved data processing and model training, with central queuing and management capabilities for better node use and better allocation of graphics processing units (GPUs) and other resources, according to Red Hat.
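For context on the pattern involved: Ray distributes Python functions as remote tasks across cluster nodes, with KubeRay handling the scheduling on Kubernetes. As a rough stand-in using only the standard library (a local thread pool instead of a real Ray cluster), the fan-out/fan-in shape of a distributed data-processing job looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(chunk: list[int]) -> int:
    # Placeholder for a data-processing step that Ray would run as a
    # remote task on a cluster node (decorated with @ray.remote).
    return sum(x * x for x in chunk)

# Fan out the chunks to workers, then collect the results centrally.
chunks = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(preprocess, chunks))

print(results)  # -> [5, 25, 61]
```

In the OpenShift AI setup Red Hat describes, the queuing and placement of such tasks across GPU nodes is what CodeFlare and KubeRay manage.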

Podman AI Lab

Red Hat has created a Podman AI Lab extension for the open source Podman Desktop project founded at the vendor, bringing a graphical interface to local workstations for building, testing and running GenAI-powered apps in containers.

Red Hat pitches the AI Lab as enabling a cost-effective and simple local developer experience while maintaining sensitive data ownership and control. The AI Lab has a recipe catalog with sample apps to speed up the creation of chatbots, text summarizers, code generators and other LLM use cases.

The AI Lab enables developers to provision local inference servers and test model behavior in a playground environment, where developers can explore capabilities and accuracy of various models, according to Red Hat.

The Podman Desktop project has more than one million downloads, according to Red Hat.

RHEL Image Mode For Partners

RHEL 9.4 has a technology preview for a new image mode deployment method, with a group of independent software vendors (ISVs), original equipment manufacturers (OEMs) and hardware vendors evaluating the mode.

The new mode promises a better pathway for testing and deploying RHEL-certified apps by making containers the platform’s building blocks and allowing individual app components to be updated with singular containers instead of using monolithic updates and traditional patches.
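In practice, image mode follows the bootc approach, in which the operating system itself is defined and versioned like a container image. A minimal sketch of such a Containerfile, assuming the `rhel-bootc` base image name (check Red Hat’s registry documentation for the exact path and tag):

```dockerfile
# Hypothetical image-mode Containerfile: the OS is built, shipped
# and updated with standard container tooling.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Layer an app component as a targeted, individually updatable
# addition instead of a monolithic patch.
RUN dnf install -y nginx && systemctl enable nginx
```

Updating the app then means rebuilding and redeploying this image, which is what enables the targeted updates and rollbacks Red Hat describes.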

The result, according to Red Hat, is partners moving faster and gaining greater control, with targeted updates based on app or appliance, and image-based updates and rollback for edge deployments, among the uses.

All RHEL-certified hardware is supported by image mode.

Policy As Code Preview

Red Hat partners can now start discussing integrations and join advocacy groups to weigh in on automated policy as code for Ansible Automation Platform, which enters tech preview “in the coming months.”

Automated policy as code aims to enforce compliance across hybrid cloud estates with varied AI apps, better preparing organizations for scaling AI workloads and sprawling infrastructure, according to Red Hat.

The feature should even help users set predetermined limits on AI sprawl and infuse governance into learning models from the start, reducing the potential for human error.

The vendor envisions policy as code assisting users with compliance mandates for mission-critical systems before those systems become AI-centric. Policy as code will also help users realign environments and resources in real time as components fall out of policy, and it allows for automated audit reporting.
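Red Hat has not yet published the syntax for Ansible policy as code, but the general pattern it describes is familiar: policies are expressed as machine-readable data, and enforcement is an automated check that reports drift. A hypothetical, minimal sketch (all rule names and fields here are invented for illustration):

```python
# Hypothetical policy-as-code check: rules are data, enforcement is
# a function, and out-of-policy components are reported automatically.
POLICY = {
    "max_gpus_per_workload": 4,          # cap AI sprawl
    "allowed_regions": {"us-east", "eu-west"},
}

def check(workload: dict) -> list[str]:
    """Return a list of policy violations for one workload."""
    violations = []
    if workload.get("gpus", 0) > POLICY["max_gpus_per_workload"]:
        violations.append("gpu limit exceeded")
    if workload.get("region") not in POLICY["allowed_regions"]:
        violations.append("region not allowed")
    return violations

print(check({"gpus": 8, "region": "ap-south"}))
# -> ['gpu limit exceeded', 'region not allowed']
```

Running such checks continuously across an estate is what produces the audit trail and the predetermined limits on AI sprawl that Red Hat describes.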

Konveyor GenAI

Later this summer, Konveyor, the Red Hat-led open source app modernization project, will receive LLM integration to aid with re-platforming and refactoring apps to Kubernetes and cloud-native technologies.

Future releases of Red Hat’s migration toolkit for apps will also receive these capabilities, according to the vendor.

LLMs in Konveyor should allow for recommended source code changes within the integrated development environment (IDE), retrieval augmented generation (RAG) for leveraging the structured migration data from Konveyor and expedited development cycles with less human error, among other use cases, according to Red Hat.
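The RAG piece of this can be sketched generically: retrieve the stored migration notes most relevant to a code issue, then prepend them to the LLM prompt as grounding context. Everything below (the notes, the keyword scoring) is invented for illustration and is not Konveyor’s actual implementation:

```python
# Toy RAG sketch: score stored migration notes by keyword overlap
# with the issue, then build a grounded prompt for the LLM.
NOTES = [
    "Replace javax imports with jakarta imports for Jakarta EE 9+.",
    "EJB singletons map to Kubernetes Deployments with one replica.",
    "JNDI datasource lookups become environment-injected config.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    words = set(query.lower().split())
    scored = sorted(NOTES, key=lambda n: -len(words & set(n.lower().split())))
    return scored[:k]

def build_prompt(issue: str) -> str:
    context = "\n".join(retrieve(issue))
    return f"Context:\n{context}\n\nIssue: {issue}\nSuggest a fix."

print(build_prompt("javax imports fail to compile"))
```

Real systems replace the keyword overlap with vector similarity search, but the shape – retrieve, then generate with context – is the same.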

Red Hat Connectivity Link

Now in developer preview – with GA expected in the second half of 2024 – is Red Hat Connectivity Link, a platform for hybrid, multi-cloud app connectivity to improve performance, scalability and security.

Connectivity Link, based on the open source Kuadrant project, brings together multi-cluster ingress management, global load balancing, programmatic app and workload movement, application programming interface (API) protection and more to ease the configuring and management work of platform engineers.

One of the promises of Connectivity Link is saving users from bringing together separate tools for setting up environments, running them and managing them, according to Red Hat.

AI Integrations

Red Hat revealed multiple new collaborations and integrations with a variety of leading AI players during Summit.

Run:ai and Red Hat have agreed to a collaboration to add Run:ai resource allocation capabilities to OpenShift AI. Run:ai’s certified OpenShift Operator is available now, with deeper integration capabilities planned.

With Stability AI, Red Hat has a deal to integrate open GenAI models within Red Hat OpenShift AI to make open source LLMs more easily accessible for users.

Red Hat and Elastic have expanded their work together to support RAG with Elasticsearch as a preferred vector database offering integrated on Red Hat OpenShift AI.

Cloud, Virtualization Partnerships

During Summit, Red Hat showcased deeper partnerships with Pure Storage and Oracle in the area of virtualization.

As part of the Pure Storage relationship, the two vendors revealed an optimization of Pure Storage’s Portworx on Red Hat OpenShift to give users one platform for deploying, scaling and managing modern applications, with one control plane for virtual machines (VMs) and containers.

With Oracle, Red Hat has reached GA for OpenShift on Oracle Cloud Infrastructure (OCI) Compute VMs, allowing users to leverage OCI’s 69 cloud regions to run OpenShift in their preferred location and operations model, according to Red Hat.

OpenShift 4.14 Support Extended

Red Hat has made a 12-month extended update support (EUS) term available as an add-on subscription for OpenShift 4.14, taking the OpenShift lifecycle to three years.

The additional term puts 4.14 support through Oct. 31, 2026, if a user wants, according to Red Hat. Six-month EUS terms have already been available.

The next EUS release will come with OpenShift 4.16. Support service level agreements (SLAs) will be inherited from the base offering purchased, Premium or Standard, according to Red Hat.

Chipmaker Partnerships

During Summit, Red Hat revealed a series of integrations with leading chipmakers.

With Intel, Red Hat announced a collaboration to power enterprise AI on OpenShift AI, facilitating end-to-end AI products on Intel Gaudi AI accelerators, Intel Xeon processors, Intel Core Ultra and Core processors, and Intel Arc GPUs, among other Intel offerings.

Nvidia and Red Hat are working on integration support for Nvidia NIM microservices on Red Hat OpenShift AI for better inferencing for dozens of AI models.

Red Hat and Advanced Micro Devices (AMD) now have a development preview for AMD GPU Operators on Red Hat OpenShift clusters, which can provide the processing power and performance for AI workloads across hybrid cloud.

In the second half of 2024, semiconductor products supplier Renesas Electronics and Red Hat plan to provide early access to refined versions of a new open, flexible compute platform for software-defined vehicles (SDVs) – with a Renesas reference platform for rapid prototyping.

And Red Hat is collaborating with Qualcomm to deliver a pre-integrated platform to run on Red Hat In-Vehicle OS for virtual testing and deployment in SDVs.