HPE AI Growth Exec On Why HPE Private Cloud AI Is Superior To ‘Grab-Your-Own-Puzzle-Pieces’ AI Solutions

"For years some companies spent a lot of money on AI and may not have had any ROI or even return on energy,” said HPE Global Head of Growth AI Solutions Dale Brown. “So we asked, ‘How do we promise that and give them one throat to choke?’ We’ve got to build the whole system, optimize it and give them a user experience where they actually have a framework of getting something done.”

HPE Global Head of Growth AI Solutions Dale Brown told CRN that the company’s turnkey Private Cloud AI system provides big benefits over the “grab-your-own-puzzle-pieces” AI solutions in the market.

“While I won’t talk about competitors, I will say this: We could have taken the tack of choose your own journey, grab your own puzzle pieces and then you assemble your puzzle,” said Brown, an AI veteran who is one of the driving forces behind the HPE Private Cloud AI sales strategy. “But we knew in an early market customers want to make a low-risk decision.”

The turnkey HPE Private Cloud AI service offering, which was launched at HPE Discover in June 2024 and shipped in September 2024, was designed from the outset to give HPE a market advantage in getting customers up and running quickly with proven enterprise AI solutions.

The build-your-own AI Factory solutions could take customers a “year to get up and running” compared with “hours” to get up and running with HPE Private Cloud AI, said Brown.

“So the time to value is not only do I give you something predictable and I am accountable for it, but I have something you can start working on immediately,” said Brown. “If you are going to do it yourself, you might take months or a better part of the year and you may not have the skill set.”

Ultimately, HPE Private Cloud AI has turned the tide on the perennial problem of customers investing heavily in AI, building their own AI Factory solutions without a sure ROI, said Brown.

“For years some companies spent a lot of money on AI and may not have had any ROI or even return on energy,” said Brown. “So we asked, ‘How do we promise that and give them one throat to choke?’ We’ve got to build the whole system, optimize it and give them a user experience where they actually have a framework of getting something done.”

Brown said HPE Private Cloud AI has been a game-changer for enterprise customers. “AI has to be consumable for the enterprise, and it has not been,” said Brown. “So we built this thing [AI Essentials software layer for HPE Private Cloud AI], and that is what is disruptive. It’s the way that we have engineered it deeply.”

Brown’s comments come with HPE Private Cloud AI gaining momentum in the market. During HPE’s first-quarter conference call on March 9, HPE President and CEO Antonio Neri said Private Cloud AI orders increased for the fourth consecutive quarter with a “substantial number of new customer wins” across both enterprise and service providers.

“We’ve been in the market for 18 months,” said Brown. “The reality is this is a proven system, and enterprise users who were at zero miles an hour, we’ve gotten many of them up to 100 miles an hour in doing AI things, and they are on their second and third use cases that are in production.”

Brown said he is proud to have been a part of the HPE team that has delivered a full-stack turnkey AI solution to customers. “I have been in AI for the better part of a decade,” he said. “I have seen it all and I started in the deep learning space with computer vision. Never before have I seen a company that actually has the desire to take a full-stack solution and deliver it to the customer in such a comprehensive way. I’m very proud to be a part of that.”

Brown said with Private Cloud AI HPE is delivering something that is “incredibly disruptive” in the AI solutions market. Other competitors, he said, have tried to provide a similar experience but “haven’t succeeded,” he said. “This is not easy. That is why I am proud of this company and my participation in it.”

Brown said his call to action to partners is to team with HPE on a co-selling HPE Private Cloud AI sales motion. “I built an AI sales team at HPE that sells solutions and outcomes,” he said. “And I will do a co-sell motion with my channel partners. So we can walk into an account together. Me and my team know how to help a customer find the problem and help solve it. So we can get a channel partner set up for success with their sellers and with their solution people. Then, frankly, we do our part and get them started so they now have a complete surface area to go execute and deliver additional value. We will actually co-sell with them to help them win in their accounts. We are committed to that because they have the intimacy and we have the technology.”

What is the difference between HPE and the competing Dell Technologies AI Factory solutions?

When we at HPE sat around the table, we said, ‘We’ve got all these assets. We’ve got all the ingredients to do something really great for customers.’ But how you serve it up matters in a new market. That was No. 1.

No. 2 is when we sat around the table and said, ‘Now, HPE plus Nvidia: when you actually stack those sets of IP, it’s actually meaningful. So how should we do it?’

While I won’t talk about competitors, I will say this: We could have taken the tack of choose your own journey, grab your own puzzle pieces and then you assemble your puzzle. But we knew in an early market customers want to make a low-risk decision.

For years some companies spent a lot of money on AI and may not have had any ROI or even return on energy. So we asked, ‘How do we promise that and give them one throat to choke?’ We’ve got to build the whole system, optimize it and give them a user experience where they actually have a framework of getting something done.

This was done just at the time when Nvidia was working on blueprints, and we said, ‘This is a perfect marriage because the epicenter of that IP and ecosystem is going to be the blueprint.’

So say you are an enterprise company that is new to AI: ‘We’ve got some skills, and I’ve got some people that have done some things in the lab, but I can’t take what they did and put it into production because it’s not secure. It’s not ready; it doesn’t scale, all of that stuff. So I need the tools to be robust and bulletproof, but I also need this thing to work both for people in a lab and for getting it live.’ And I say, ‘What is it you’re trying to do? And what is the problem statement you have?’

And they’ll say, as an example, ‘I want to be able to chat with my documents for my customer service people. My problem could be a litany of things. It’s a complete in-the-rough thing. It could be access to my data. It could be I’m worried about this producing something it shouldn’t and saying some bad things that could put me in the press in not a good way.’

This is what is different about our sales motion as well. We show them, ‘This is your problem statement,’ and we demonstrate that use case for them.

Sometimes we’ll have synthetic data or we give them access and they can bring their own data. So they can actually make a low-risk decision and they can see an outcome in the construct of this system.

It is an optimized system with one throat to choke with a complete stack. We include implementation and all of the onboarding, training and all of that after the sale. We’ve shown them how not just to boardroom-solve the problem but tactically how to knock down the barriers to getting results.

I would argue that the combination of our tech and tools is evolutionary. But that’s not how the customer buys this. They buy this for the outcome.

It’s not a stack of feeds and speeds. And while our entry point might be the IT people, the IT people get the AI practitioners involved because the AI practitioners may not have ever cared about speeds and feeds. The same is true for the channel.

What is the big breakthrough with regard to the HPE AI Essentials software layer with Private Cloud AI?

AI Essentials and the way it interacts with Nvidia’s tech is what’s unique. Nvidia does amazing things and helps you engineer things deeply and completely, and they’re fit for production.

AI Essentials is a horizontal environment where you can actually open up a project, build something, and you have a cogent beginning, middle and end to testing things and deploying them.

And so the secret sauce is Nvidia helps you go deep, and AI Essentials helps you go wide.

AI was born by an industry saying, ‘We’ve got some cool tools in tech,’ and then a bunch of point solutions sprout up, and that makes the customer suffer, and they take the risk.

AI Essentials and Nvidia’s full tech stack, along with the acceleration and all of our tech, remove the risk of actually knowing what to do next, getting to an outcome in a predictable way, and everything’s optimized.

It’s not like my flour or sugar is any better than anybody else’s, as much as the way I assemble it and I deliver it. It’s a nice ‘Happy Meal’ with a toy inside so it is something that people can consume.

AI has to be consumable for the enterprise, and it has not been. So we built this thing [AI Essentials] and that is what is disruptive. It’s the way that we have engineered it deeply.

Our industry when we sell widgets is all about selling the latest tech, but the reality is it is all about how it shows up for the customer integrated and ready to go. That’s what we do.

So you have to buy HPE Private Cloud AI to get AI Essentials? You cannot get it with an AI Factory solution?

Our meta positioning for the product is that it’s a turnkey AI Factory for the enterprise, focused on removing all the friction for inferencing. So it’s not a training platform. It’s all about taking a language model and data and, frankly, a multimodal model and blueprints, and creating something that works and scales.

So we’re dealing with enterprise use cases, and so it’s really relevant. The way this product was built is really relevant for enterprise inferencing use cases whether it’s agentic, GenAI or whatever.

So it doesn’t necessarily apply the way it is today to the whole [HPE AI] family. Now, what does the future look like? Who knows, as far as the strategic decision goes.

Our AI Factory out of our sister organization is dealing with training and inferencing, but they’re dealing with rack scale. They are dealing with thousands of things.

If you just want transportation for the young adult in your family, that is a different car than if you want a family vehicle that can carry six people or a weekend car where you can put the top down and have a good time. Those are different cars.

That is the same argument about an AI Factory. I might give a different analogy, but we built something fit for purpose for the enterprise that gives customers one throat to choke and takes the risk out of it. We just hand them the keys.

If you are going to build an AI Factory, you are going to wait a period of time and maybe even longer now with market supply conditions. We can build this and ship it. It ships in a rack, and it is all integrated. We are up and running in hours for a customer.

We have been so successful with this product, but we had to educate people and change their expectations because they were used to choosing their own journey with a bunch of puzzle pieces. And we’re like, ‘No, you don’t need to do that.’ And now they’re like, ‘OK, now I get it.’ And, by the way, they can still bring their own tools. They bring those tools with them, and they can put them in, but we give them a framework to do that.

What’s the biggest takeaway from the latest announcements including HPE Private Cloud AI additions at Nvidia GTC 2026?

So here’s the decoder ring: Everything about AI is contextual, including how you apply tech to get to done. But it is confusing because the tech is changing too fast. Customers are still learning what they want to do and how to buy it.

It’s like going through a haunted house. You pay the money and you get in there, but you don’t know what you’re going to be hit with until you get into that room.

So we are actually saying, ‘We’re setting the enterprise context for how you get AI done.’ So when you look at that list of stuff that we’re doing, we’re saying that it doesn’t just give the customer access to the tech, but it’s in context.

So in other words, if you’re going to go build a digital twin you have to have access to your artifacts to help you build that and the data repository.

You’re going to have to actually take a blueprint, which is a template basically, and then you’re going to have to apply some other tools in tech, and you’re going to have to deploy it and test it. We’re giving you the ability so it all works. That puzzle fits in this context and you can do something.

Let me give you the antithesis of this: Let’s say that you were going to DIY an [AI] system. You would have to know how to create a container. You would have to actually optimize that model in that container for the GPU it’s going to run on. The average customer does not have that level of expertise, let alone knowing how to make that work behind an application with a database to ensure it is secure.

When it comes to AI, customers don’t have the skill set to do that in these enterprise environments.

That’s why we’re saying we’re taking the risk out of it for customers and we’re taking the friction out of it. So when we talk about acceleration, it is all about where are the points where someone will get stopped? How do we pre-answer the question and give them a track to run on that’s clean and clear and free of rocks?

You’re not going to take a thoroughbred racehorse and put it on a track with potholes. Your horse is going to break its leg. You’ve got to get rid of the rocks and you’ve got to grade the whole track. What we’re doing is saying, ‘Even for your youngest horses and your oldest ones, this is a track they can run on and win, and you can be predictive about how they win.’

Is HPE the only one that is getting customers up and running quicker with a faster ROI? Is HPE the only true enterprise AI company?

We have set up the market the same way we set up car buyers: everyone wants the latest and greatest. The problem is that we have to help customers in the enterprise apply that. We have to show them how it actually works so they can operationalize it.

The big difference is we help them operationalize it with a context. But let me answer your question: In the market today this is the full-stack turnkey AI Factory built for inferencing in the enterprise where we take full responsibility for it being integrated, deployable, optimized and even provide post-sales AI engineers to our customer for a period of time to show them how to use the tools and get their use cases built.

You’re right. I’m not speaking about specific competitors, but in my experience other companies have a choose-your-own-journey approach, and so the customer has some level of culpability and responsibility. We at HPE are the only full-stack solution for a turnkey AI inferencing factory today in market.

Are others doing that? I can’t speak to that. Could there be some other announcements?

We’ve been in the market for 18 months. The reality is this is a proven system, and enterprise users who were at zero miles an hour, we’ve gotten many of them up to 100 miles an hour in doing AI things, and they are on their second and third use cases that are in production.

That success is a result of HPE Private Cloud AI?

Absolutely. That is why I use the word ‘turnkey’ because it is the only turnkey solution for the enterprise in AI today, and it’s focused on optimizing that inferencing layer, which is really where the enterprise is going to live. It’s not going to live in the training world. It’s going to live in how I take artifacts and build something.

You could take the Meta model. You could take Gemini. Then you turn it into an application. That is what this is going to be about for the enterprise. They are trying to find new ways for profitability and efficiency.

Why is that so important for partners and how they make money?

The beauty of the HPE partner channel is they do an amazing job of managing customer intimacy, the first mile and the last mile for the customer. In AI, that matters more than anything because it’s not about selling SKUs and widgets, speeds and feeds. It’s about selling an outcome. So what we do is we give the context to the channel and to their end customer for places to deploy things very predictably, which means the channel partner is establishing a node, a beachhead inside of an account where they can actually do additional services, additional consulting, add on additional capacity, and they are at the epicenter of a durable AI relationship with their customers.

We’re teaching all of our partners how to build the blueprints. We’re teaching them how to think about these applications, and we’re teaching them how to really capitalize on building services and value-add. So we’re putting them in the driver’s seat.

If a partner goes the choose-your-own-journey route, they might be able to build that for a customer once, but it might take a year before that customer is up and running building applications. We can do that in weeks.

Channel partners make more money, candidly, on services work, whether it’s a managed service or whether it’s other engagements. So we accelerate the time to value for them too.

If they have multiple customers with Private Cloud AI, they are operating in multiple layers of context: What infrastructure do you need? What are you building? What’s in production? How can we help you? Do you need a workshop? Do you need training? Do you want us to build your blueprints, and we’ll manage the library for you? Do you want us to provide managed services on the whole box?

We have partners who are building these services and doing an amazing job. And here’s what else: It levels the playing field because the partner intimacy with AI is what gets monetized. It’s not about how big the partner is. So this might be as meaningful for the boutique shops as it is for the big guys in the channel.

What is the biggest obstacle in the channel that you would like to change to get more partners to adopt HPE Private Cloud AI?

It’s not a channel problem. It’s a customer challenge, is how I would say it. Many customers and sellers have to realize that AI is contextual and a business conversation first.

There’s no reason to do AI in the enterprise unless there is a payoff. Talk about that first. When I get in front of a customer and they’re talking about speeds and feeds too soon, and I don’t know what they’re trying to do, we’re both losing because it’s not about the prettiest red Corvette. It’s about where you are going to go in that Corvette.

I have my sellers talk to customers about what are the outcomes? How are they going to get there? What are the current hurdles? And how do we help them solve the hurdles? That’s my message to the channel: If you want to have a strong book of business in AI, find the worry and the pain points of what’s stopping customers today and show them how you take that away, whether that’s a Statement of Work, whether that’s HPE Private Cloud AI, whether that’s educating them. That is where you can actually be truly influential and manage that whole book of business because you become the provider of AI, while everyone else is giving them puzzle pieces.

How big is the CrowdStrike agentic security AI support for Private Cloud AI and the certification for Fortanix Confidential AI, a joint solution with Nvidia?

That’s the reason we used third-party tech to build Private Cloud AI. The first part of that was the co-engineering with Nvidia. That was the No. 1 thing, but the [HPE ISV] Unleash program, and specifically those two partners—Fortanix and CrowdStrike—are mission-critical for customers that have certain needs.

So when it comes to Confidential AI, the thing to remember is you have to make sure that the data is encrypted at inception, and you have to make sure that the system is locked down from boot. That’s what Fortanix does. CrowdStrike is protecting against threats.

That covers what is happening on the inside, from top to bottom: all of the data is secured, and nothing’s going to threaten it. And then what’s happening on the outside is that you’ve got the entire infrastructure protected. So it’s sort of like north, south, east, west [protection]. And why are we using partners? Because it’s best of breed.

Our partnerships are going to be based on what customers want. We want to meet them where they are. Do they want CrowdStrike and Fortanix? Yes. So we’re going to have that as part of the solution. By the way, it doesn’t just stop with Private Cloud AI. That’s also true with the ProLiant servers.

This is why all of my people and my solution architects have to be trained on all the HPE Unleash [ISV] partners. We’re saying we’re going to remove friction so we have to know what you’re trying to do in context and then give you the path to get there.

What is the biggest untapped opportunity that you see with the new HPE Private Cloud AI offerings?

It is absolutely sovereignty. Because it used to be we thought about sovereignty as a public sector thing for defense and government intelligence. No more. Now sovereignty has a lot to do with all of the regulated industries because the threats are mounting. The bad actors are learning how to use technology against the good actors.

So sometimes the only way to defend against that is to have a sovereign air-gapped solution where you not only have physical separation but you have all of the policies and tools on the systems.

You’re going to see a lot more customers asking for that. So a health-care provider is going to say, ‘I have to have a sovereign solution with all of my patient records.’ So they are going to have to rethink their systems because having it available on the internet, whether that is on-premises or the public cloud, poses too much risk. There is too much that can happen.

It’s a numbers game. And guess what? Bad actors are going to use efficiency and AI to potentially generate those numbers and generate the attacks.

The surface area of sovereignty is going to explode. It’s going to be the de facto standard. So if you’ve got to bring AI to the data, you’re going to want that to be separated from everything else, depending on what that data is.

How big a breakthrough is the air-gapped solution for HPE Private Cloud AI versus an AI Factory configuration?

When you think about Private Cloud AI and you think about us performing functions remotely, we’ve had to have those strategies for the customer. So batch processing, updates and all the things that we normally do in the box have to happen differently. It has to happen where the customer may take responsibility and apply that, but we still have to give them the assets.

We also had to think of the construct so that there was still a cloud experience. So here’s the difference. Most AI Factories don’t have a cloud front end where you just give the customer a login and then they’ve got a cloud environment. It’s not that frictionless, because you’re building it for AI practitioners who know how to use the tools, are very well paid and have a whole farm of people.

For the enterprise to have a cloud-like experience, you need to make it simple. But to do that you need to do a whole lot of complicated technical work to have that air-gapped instantiation. So we’ve done the work top to bottom.

It’s not just about the orchestration layer, and it’s not just about the user interface layer. It’s about all aspects that had to be separated so it can operate like a cloud, but in a very private way, in an air-gapped mechanism. It’s more evolutionary than revolutionary, to be candid, because we had to put it in context and make it so that it’s bulletproof. But we also had to make it so you can live with it. You can get updates. You have a way of getting patches applied. We supply you with the library, but you have a way of grabbing those, validating them, and then applying them with your own rigor.

So we set the customer up for success in the context of sovereign. Other AI Factories can run off net. They can run air-gapped, but they don’t take the same level of complexity away from the user, because their users have to be the experts. We have to make it simple.

What are you most proud of when you look at all the HPE GTC 2026 announcements personally and for HPE?

I have been in AI for the better part of a decade. I have seen it all and I started in the deep learning space with computer vision. Never before have I seen a company that actually has the desire to take a full-stack solution and deliver it to the customer in such a comprehensive way. I’m very proud to be a part of that.

For me, personally, one layer deeper, I am proud that I was the sales leader that inaugurated, launched this product from scratch, sold customer one, built the team, worked with the services team, worked with the business unit across all these towers in this big tech company—and we are delivering something incredibly disruptive that we are going to be talking about for years because other competitors … have tried to do this and they haven’t succeeded. This is not easy. That is why I am proud of this company and my participation in it.

It’s been amazing to hear customers say they transformed their business with Private Cloud AI, that they couldn’t do it before and we enabled that. There are not a lot of people who can say they delivered things to customers that are complete and that work. Customers have used it successfully. There is a whole lot of shelfware in the world, and this isn’t that.

What is the ROI customers see in terms of getting up and running quickly with Private Cloud AI?

We have a tool that provides estimated time to value and ROI. We don’t put a blanket statement out because it could get misquoted.

What do you say to customers looking at a public cloud approach or an AI Factory model?

If you are going to build it in the [public] cloud, you could go fast but it may be that you don’t really have an application because you just use their services. So it depends on how you feel about intellectual property.

If you are going to DIY, you are going to have to have multiple skill sets across seven different pinnacles that are architect-level to build and integrate this.

So you could go fast and have a long-term challenge, or you could take longer: it could take you a year to get up and running, while I can get you up and running in hours once I ship Private Cloud AI.

So the time to value is not only do I give you something predictable and I am accountable for it, but I have something you can start working on immediately. If you are going to do it yourself, you might take months or a better part of the year and you may not have the skill set.

What I do is put it in the context of making it a reality to the business. What I need to know is: what is the business outcome they are trying to achieve? Is it cost takeout? Is it generating revenue? We can give them all of that, because then they can make board-level decisions.

The reality is we get it up and running in frankly less than a day and the customer starts building things that afternoon. That is unheard of. That is what they have come to expect from public clouds, which is great. But now we can actually bring the AI to the data. It’s more about their context.

But we do have customers that have ROI of less than a year on an application they launched. We had one customer that bought HPE Private Cloud AI and in eight months they got a complete return on the solution. It meant so much cost savings for them.

What is your take on the AI Factory moniker versus HPE Private Cloud AI?

Here is why AI Factory is a great moniker in its own right. People understand that in a factory it’s raw materials in and finished product out.

But guess what? A factory for processing food and a factory that builds cars are different factories. So it is all about context. The reason Factory works is because Nvidia has done a great job talking about tokenization. So we are going to see customers more and more look to tokenization as a measurement of efficiency and capacity in their factories. In the coming years, the factory moniker will take on deeper context and meaning in the populace.

How big a breakthrough are these latest HPE Private Cloud AI announcements?

Look at the 128 GPUs: when we first launched the system, configurations were like four GPUs and eight GPUs. We quickly realized customers wanted headroom.

As we got into the era of agentic AI, we learned that some agentic AI systems need more horsepower, and they have more requirements in parallel.

In other words, you’re going to spin up one, two or 10, so you need more real estate of acceleration. So when we originally stopped at lower GPU counts, customers said, ‘I need to move up. This is our first tranche, a foray.’ Most of them aren’t buying 128 GPUs out of the gate, but they want the headroom because as they get success they’re going to want to go faster. So the 128-GPU configuration is disruptive because it gives the customer runway to do what they want to do in AI. That’s why it’s important.

The air-gapped [HPE Private Cloud AI] is about a new market space, but it’s also about a new way of thinking about safety, security and trust.

The other things are about removing friction from the tech stack experience and allowing them to consume the tech the way that they need to, whether it’s in Private Cloud AI or the ProLiant series. Those things are important, and they are table stakes if the customer is in that use case.

Do 128 GPUs open up the total addressable market?

It’s not about total addressable market; it gives them the headroom to grow, because after they start to get some success they want to do more. So they have got somewhere to go.

What is the call to action for partners with HPE Private Cloud AI?

That’s an easy one. I’ve been selling AI for 10 years. I built an AI sales team at HPE that sells solutions and outcomes. And I will do a co-sell motion with my channel partners. So we can walk into an account together. Me and my team know how to help a customer find the problem and help solve it. So we can get a channel partner set up for success with their sellers and with their solution people. Then, frankly, we do our part and get them started so they now have a complete surface area to go execute and deliver additional value. We will actually co-sell with them to help them win in their accounts. We are committed to that because they have the intimacy and we have the technology.