Powering AI With Confidence: How Red Hat Delivers Consistency From Edge To Cloud
Artificial intelligence is rapidly moving from experimentation to enterprise-wide deployment, pushing organizations to rethink how they build, govern and scale AI across increasingly complex environments. As enterprises balance innovation with security, privacy and cost control, the need for a consistent and trusted AI foundation has never been greater. From air-gapped systems to sovereign and public clouds, responsible AI adoption now depends on flexibility without sacrificing control.
Red Hat is redefining enterprise AI with an open hybrid cloud approach built for control, consistency and privacy. In this CRNtv interview, Ryan King, VP of AI Ecosystem and Infrastructure Partners at Red Hat, breaks down what’s new in the company’s AI strategy, and how open architectures, scalable infrastructure and emerging agentic capabilities are helping organizations confidently accelerate enterprise AI.
Jon: Would you be able to talk a little bit about what's new at Red Hat as it relates to AI and your current engagement with the ecosystem?
Ryan: My passion, and Red Hat's core, is what we do: we helped make Linux enterprise-ready, and we helped make Kubernetes enterprise-ready and hybrid. That has been our mission.
What's really exciting for me is moving from providing AI capabilities, which we've been doing for about 10 years, back to what people recognize us for: delivering a hardened platform that gives everyone choice about where they want to run it, and really focusing on how to run this most efficiently at scale, with control.
That's a big part of what our customers are looking for. The shift is significant: it's no longer just about releasing products; customers and partners are genuinely enthusiastic about Red Hat doing this the enterprise way.
What sets us apart is our investment in the platform and doing what we do best. We're hearing from customers and the ecosystem that they are getting behind us and saying, 'This is amazing: the capabilities you're providing, how you're packaging it, and how you're operating at our level.' We're offering the best way to run AI in the enterprise, in a hybrid way: any model, any cloud, any accelerator. Those are all capabilities we provide.
For me personally, it's about getting back to the fundamentals and doing this the right way.
Jon: How exactly is Red Hat helping customers ensure that sensitive data and its usage remain protected, especially in a world of hybrid and multi-cloud environments?
Ryan: It's their platform. They run it, we provide security for it, and we do that with our ecosystem. Security and control features are built directly into the platform. For hybrid environments spanning private and public clouds, we can use confidential computing, which isolates workloads in a trusted execution environment with attestation. We can also provide cryptographic protection for data in motion and at rest.
There are specific features in OpenShift that provide deep control of Kubernetes and networking to allow for isolation at many levels, such as process isolation, namespace and cluster isolation, and network management. This is all demonstrated in our platform, where we've achieved the highest levels of security, including FedRAMP High.
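To make that namespace and network isolation concrete, here is a minimal sketch (not from the interview) of a default-deny posture expressed as Kubernetes NetworkPolicy objects, created with the kubernetes Python client. The namespace ai-workloads, the app labels, and the policy names are hypothetical placeholders, and the same objects could just as well be applied as YAML manifests.

```python
# Minimal sketch, assuming a reachable cluster and the `kubernetes`
# Python client installed. Creates a default-deny ingress policy for a
# hypothetical "ai-workloads" namespace, then allows traffic to pods
# labeled app=model-server only from pods labeled app=inference-gateway.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.NetworkingV1Api()

deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules = deny all
    ),
)

allow_gateway = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-inference-gateway"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"app": "inference-gateway"}),
            )],
        )],
    ),
)

for policy in (deny_all, allow_gateway):
    api.create_namespaced_network_policy(namespace="ai-workloads", body=policy)
```

The design choice here is the one the interview gestures at: start from deny-by-default per namespace, then open only the paths a workload actually needs.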
Behind the many acronyms are the security clearances and certifications the platform has earned, and they are proof that Red Hat is a strong player in emerging markets like the sovereign cloud world. Security and operational controls such as PCI DSS, SOC 2 and FedRAMP make us highly preferred for air-gapped and disconnected environments, providing security both physically and around the platform itself.
But it's not just Red Hat. We've been in this space for a while, and we have an extensive ecosystem of security partners. Because we're the leader in enterprise Kubernetes and Linux, we have partners who manage very complex AI workloads across many different environments.
Jon: For organizations that are looking to start or maybe scale their AI journey responsibly, what advice would you be able to give them, and how can Red Hat then help them move forward with confidence?
Ryan: I think what we just talked about is how you take control of your AI environment or platform.
We provide a lot of capabilities for that. Through our ecosystem, there are many ways to find value, including partners and ISVs that deliver specific AI use cases with real value, which is what we all want to drive toward. But you're talking about just getting started, right?
So it's more about where you start with the technology we offer. Red Hat AI Inference Server, based on vLLM, is a quick start; it's just a container. It shows you how to run inference efficiently with the optimized models we have from Red Hat.
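As a hedged illustration of how little is involved once that container is running: vLLM exposes an OpenAI-compatible API, so a few lines of client code are enough to test inference. The endpoint URL, port, and model name below are assumptions for the sketch, not product specifics.

```python
# Minimal sketch: query a vLLM-based inference server through its
# OpenAI-compatible endpoint. The base URL, port, and model name are
# placeholders; substitute whatever your container actually serves.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local container port
    api_key="not-needed-for-local",       # local vLLM ignores the key by default
)

response = client.chat.completions.create(
    model="granite-3.1-8b-instruct",  # hypothetical optimized model name
    messages=[{"role": "user",
               "content": "Summarize hybrid cloud AI in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```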
Another place to start is how you procure and set up your own environment. We have an ecosystem of hardware partners, with simple starter kits and POCs, to help you get started on the platform. Once you have the platform up and running, I definitely recommend our optimized models for this.
Look at what we have in our playground so your users can see the capabilities within your enterprise and get their initial projects moving.
For more on Red Hat’s ecosystem catalog, visit its website.