AI Product Chief On Why HPE Has The Right Strategy To Win A ‘Significant Chunk’ Of Public Cloud AI Market

AI Product Chief: HPE Will Provide ‘Significant’ Cost Savings In Public Cloud AI Battle

Hewlett Packard Enterprise Chief Product Officer of AI Evan Sparks says the company’s new supercomputer AI public cloud offering, HPE GreenLake for Large Language Models, a form of generative AI, provides “significant” cost savings over public cloud competitors.

The supercomputing-class public cloud capability that HPE is bringing to the table provides “efficiency and reliability” in running AI models that spell big savings for customers compared with rival public clouds, said Sparks.

“When you are talking about monthlong jobs that use 1,000 GPUs and you save 20 percent, that savings goes directly to the customer, directly to their bottom line, and adds up to significant kinds of dollars,” said Sparks in an interview with CRN. “The other thing is reliability. When you are leasing [public] cloud GPUs, you pay for them whether they work and solve your problem or not. You might get a node that fails partway through and so on and you paid for the time it took to run your job up until that point, even if the end result is no good for you at this point.”

HPE, said Sparks, is providing a “very powerful alternative to the public cloud” for Large Language Models. “We think this is going to be foundational toward that next generation of computing that we are entering right now,” he said.

As to the specific data HPE has seen with regard to customer savings in initial trials of its public cloud AI offering, Sparks said, “We have a program where we do a total cost of ownership approach with our customers in these kinds of settings and the savings are significant.”

The bottom line: Thanks to HPE’s battle-tested supercomputing and high-performance compute prowess in building reliable AI systems with good output, the total cost ends up “being way lower” than rival public cloud offerings because “you are not spending your time refining over and over these jobs that fail,” said Sparks.

Sparks, the former CEO of Determined AI, a highly regarded machine learning AI company that was acquired by HPE two years ago, said he has been “surprised” by how fast the Large Language Model AI market has taken off. “One of my first conversations with [HPE Executive Vice President and General Manager of High Performance Computing] Justin Hotard after I joined HPE was around how big a deal I thought LLMs were going to be,” he said. “I think I underestimated it by a factor of 10 or 50. It has been really an incredible growth story over the last couple of years here.”

Sparks said HPE has the right strategy, people and intellectual property to win a “significant chunk” of the public cloud AI land grab.

“This is exciting because we are intersecting that market as it is really becoming massive,” said Sparks. “Don’t get me wrong. It is also going to be a highly competitive space going forward too. There’s going to be a bit of a land grab. We have to be convinced that we have the right strategy, which I am convinced that we do, and that we have the right portfolio of people and intellectual property to really win a significant chunk of this market and I think we can.”

Why did HPE choose to offer GreenLake for Large Language Models as a public cloud, and is there a credit card swipe capability for the offering?

The big point of why we are bringing this out as a public [cloud] offering is that we think that what we call ‘capability-class computing’ technology, or supercomputing technology, provides necessary infrastructure and capability that doesn’t exist today in the public cloud.

We are the market leader in supercomputing, as you well know, and AI has quickly become a supercomputing problem over the last several years. The problems you run into as you start to scale out and train these Large Language Models mean high density, tons of compute, tons of accelerators in a single node, and nodes that are connected with really, really high-throughput, low-latency interconnects.

You have to start worrying about job completion rate. You have to start worrying about the reliability of the overall system rather than just single nodes. We have a wealth of experience in that space and we have been running the biggest supercomputers on the planet for a very long time. That’s what we think this market really needs—that capability-class computing.

We also recognize that not everybody who is playing in this space wants to buy a supercomputer and own it 24x7x365 for the next five years. So this allows us to give fractional access to those kinds of machines, in a way that is highly secure and designed to be multitenant, to a set of customers that needs them. The jobs that people generating Large Language Models want to run—and this gets to your billing question a little bit—are not necessarily the kind of jobs where you need a GPU or two for a couple of hours here and there. You might need 1,000 for a month straight or more. Swiping the credit card for that—I don’t know about your credit limit but my credit limit wouldn’t let me get there. My boss wouldn’t approve that kind of thing.

So are we going to offer that swipe-the-credit-card kind of experience? Not out of the gate. But we are not ruling it out. We would love to be able to offer that experience to customers as well. These are large, leadership-class kinds of jobs. So they are going to be bigger and they are going to require a more involved, customized sales process.

What are the big cost benefits that customers will see versus public cloud competitors, and what is the competitive differentiation?

I think the first key pieces of this are our efficiency and reliability, which are directly linked to cost savings. So I’ll give you an example: We have in the software portfolio things like our machine learning development environment, which has certain optimizations that allow these big model training jobs to run sometimes 15 percent, 20 percent, 30 percent faster than they would run out of the box with conventional open-source approaches.

When you are talking about monthlong jobs that use 1,000 GPUs and you save 20 percent, that savings goes directly to the customer, directly to their bottom line, and adds up to significant kinds of dollars.

The other thing is reliability. When you are leasing [public] cloud GPUs, you pay for them whether they work and solve your problem or not. You might get a node that fails partway through and so on and you paid for the time it took to run your job up until that point, even if the end result is no good for you at this point.

So by building systems that are reliable as we do in high-performance computing designed for good output, the total cost can end up being way lower because you are not spending your time refining over and over these jobs that fail.
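
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The 1,000-GPU, monthlong job and the 20 percent savings come from Sparks’ example; the $2.50-per-GPU-hour rate and the 60 percent failure point are hypothetical placeholders, not HPE or hyperscaler prices.

```python
# Back-of-the-envelope cost of a monthlong, 1,000-GPU training job.
# RATE and FAILURE_POINT are hypothetical placeholders, not quoted prices.
GPUS = 1_000
HOURS = 24 * 30                   # one monthlong job
RATE = 2.50                       # hypothetical $ per GPU-hour

baseline = GPUS * HOURS * RATE    # cost of the full run
savings = baseline * 0.20         # the 20 percent speedup from Sparks' example
print(f"Baseline job cost:     ${baseline:,.0f}")   # $1,800,000
print(f"Saved by 20% speedup:  ${savings:,.0f}")    # $360,000

# Reliability: a job that dies partway through is still billed for the
# hours it ran, even though the output is unusable.
FAILURE_POINT = 0.60              # hypothetical: failure 60% into the run
wasted = baseline * FAILURE_POINT
print(f"Wasted by one mid-run failure: ${wasted:,.0f}")  # $1,080,000
```

Even at placeholder rates, a single mid-run failure can waste far more than the speedup saves, which is the reliability argument Sparks is making.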

Is the cost savings that customers will see because of the supercomputing capabilities HPE is bringing to the public cloud as much as 30 percent out of the gate?

I want to qualify that. It depends heavily on the workload and it depends on what you are running. Everybody has a different deal they are starting with in terms of what they are paying the cloud providers and so on.

You have obviously looked at the cost of the supercomputer AI model compared with the public cloud hyperscalers. What kind of average savings or anecdotal data have you seen?

We have a program where we do a total cost of ownership approach with our customers in these kinds of settings and the savings are significant.

How many customers have you showed the new public cloud AI offering to, and what has been their reaction?

The demand has been really fantastic. Our CEO [Antonio Neri], in our earnings announcement and just recently on CNBC, has talked about some of the backlog of orders for these large AI systems and supercomputing-class AI systems that we are seeing, and it has been a very dramatic increase. Our CEO mentioned $800 million booked just since the beginning of Q2 in these areas. So that’s a public number I can reference. It is pretty fantastic.

Why was the decision made to go direct, and is there a role for solution providers with this new Large Language Models public cloud offering?

The partner ecosystem is essential to us here at HPE in general. A couple of points I’d say there: We are working hard to enable partners and want to help our channel ecosystem partners build profitable AI businesses.

Our partners have access to Tech Talks and in-person AI seminars and our Data Science Summits and our certification programs—all to help them develop their skills in this environment. This also includes foundational seller programs to help this community ramp up on their domain expertise because it is obviously one of the fastest-growing markets that any of us have ever seen.

The other thing I would say is these products are part of our Partner Ready rebate program, and we are constantly exploring new opportunities to help create software bundles across industries so that our partners can lead with these bundled solutions. We have a number of examples of this: Our HPE Machine Learning Development system is a bundled hardware/software solution.

We have announced that the machine learning development environment is going to be available via GreenLake for high-performance computing and then, of course, we have our Ezmeral software bundles as well, which can help provide AI and all kinds of analytics to customers in that ecosystem.

Why can’t partners play in the HPE GreenLake for Large Language Models market?

It’s currently a direct offer based on the specific nature of the use cases that require a supercomputer—not just traditional HPC. That doesn’t mean we won’t consider including partners in this ecosystem with availability in the future. But it is a new area that we are entering and it is going to take some time for us to mature the ecosystem around that.

How big a demand do you think you will see for this public cloud AI offering, HPE GreenLake for Large Language Models?

I think it could be quite big. That said, given the use cases and the customer set we are targeting, in the near term it is more likely to be a small number of bigger customers than lots of smaller customers—again, just because of the scale that we are talking about here.

Having no data egress fees with the AI supercomputing model is a big deal. Isn’t this breaking the public cloud model because all the other players have data egress fees for these big jobs?

The data egress fees are a natural part of the public cloud ecosystem and a reality that customers typically have to deal with. It is not like that at every single public cloud, but for the big guys that tends to be the case. We realize that this capability is a piece of a customer’s overall workflows, and, consistent with our overall corporate strategy, we are leaning into hybrid as a big piece of the future. We realize that complex multistage jobs are going to involve data transfers between our cloud and the public cloud and so on. So a supercomputing capability is one small but very important part of a much bigger workflow. So we need to enable those workflows with as little friction as possible. That is why we made this decision in this particular case. Is it always going to apply going forward? I wouldn’t make that promise here.
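
For a sense of the scale involved, here is a rough sketch of what egress fees can add to a multistage workflow. Both the $0.09-per-gigabyte rate and the 100 TB dataset size are hypothetical placeholders; actual hyperscaler egress pricing varies by region and volume tier.

```python
# Rough cost of pulling a training dataset back out of a public cloud.
# Both figures are hypothetical placeholders; real egress pricing varies.
DATASET_TB = 100                  # hypothetical dataset size
EGRESS_PER_GB = 0.09              # hypothetical $ per GB egressed

egress_cost = DATASET_TB * 1_000 * EGRESS_PER_GB
print(f"Egress fee for {DATASET_TB} TB: ${egress_cost:,.0f}")  # $9,000
```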

You said on the webcast that you are in discussions with ChatGPT. Can you expound on that?

I want to be clear: We are in discussions with a number of other partners for language models. We are launching with Aleph Alpha’s [Luminous natural language capability]. And it is not just language models. It is other large foundation models that could apply to other industries and so on. I don’t think we are going to be exclusive with one particular model provider in an area. So we are in discussions with other providers of language models.

We are looking at a broad portfolio of offerings in the future. I think it is still early innings to see who the winners will be in this LLM market. So being able to provide our customers choice and being able to let them choose the best, that is consistent with what we like to do in general here at HPE.

So ChatGPT could be part of this in the future?

I don’t want to comment on a specific partner like that.

You have been involved with AI for a decade. How big is the opportunity for HPE GreenLake for Large Language Models as you look out into the future technology landscape?

Personally, I think that generative AI represents a pretty fundamental, foundational shift in technology that can scale to the size of mobile or Web 1.0. It feels that big. The pace with which the ecosystem is moving, the number of new companies and ideas and technologies that are being created on a daily basis [is breathtaking]. As a researcher, it’s gotten to the point where if I am not reading the latest papers on a daily basis I feel out of touch. I have never felt something like that in my career.

So it’s an incredible amount of excitement. Do we run the risk of this being an overhyped moment in that cycle? Absolutely. But I think that there will be new technology, new ideas, new foundational efficiencies found in the economy on the other side of whatever it is we are feeling right now. I think the tide will be significantly higher no matter what happens over the next six, 12 or 18 months with the current excitement.

Can you put this new HPE GreenLake for Large Language Models in historical perspective for you and HPE?

We’ve been absolutely strategically focused as a company on this area—data, data-driven decision-making, intelligent applications—for a number of years, from BlueData and MapR to Cray, SGI, Determined AI and Pachyderm. All of these have been pointing toward creating the foundation for this capability, and at some level the work that we have done to build things like Frontier—the world’s first exascale system—is proof that we can absolutely play in this market and absolutely provide differentiated and valuable technology right when the market needs it.

So to me this is exciting because we are intersecting that market as it is really becoming massive. Don’t get me wrong, it is also going to be a highly competitive space going forward too. There’s going to be a bit of a land grab.

We have to be convinced that we have the right strategy, which I am convinced that we do, and that we have the right portfolio of people and intellectual property to really win a significant chunk of this market and I think we can.

HPE has hammered home, leading up to this, the importance of data gravity and latency at the edge. How does the public cloud model sync up with the HPE strategy at the edge, where data has gravity and latency?

In the same way that we all have computers that sit in our pockets, we are having this call on computers that sit on our desks, and we are all using computers that sit in the cloud to process this conversation. You are going to see exactly the same thing play out with AI.

There are going to be models that run in your phone handset, but those might be backed up by models that are running at the near edge in a telco facility or within a CDN [Content Delivery Network] or in the core of your corporate data center—secure, protected, behind the corporate firewall where you keep your crown jewels or they might be in the public cloud.

The way I bifurcate this market in a big way is training and tuning—that is, the construction and the fine-tuning or modification of these models which, generally speaking, you are going to need at least clusters of GPUs if not full supercomputers to do. There’s that piece of it and that is probably going to run remotely for most people. But then the deployment of these models—the inference, running them at the edge and then connecting that data in a continuous way so the models get better over time—that is going to run all over the place. So we are working on further products in this space. At least architecturally I see that as the way the world is going to go.

Why did you call it HPE GreenLake for Large Language Models instead of HPE GreenLake for Generative AI?

It’s a great question. If we called it HPE GreenLake for Generative AI, that is a bigger umbrella. You have to strike the right balance of helping customers understand what they are getting with the product and then still leave ourselves room for further products in this area.

So Large Language Models is the first product we are launching with, but HPE GreenLake for—insert another generative AI application—could be in the future. That’s the piece that I want to focus on. The applications are going to be what’s most important to customers. As much as I like to geek out about learning rates and optimization techniques and so on, most customers don’t want to focus on that. They may have internal AI teams that are focused on that, but a lot of customers want to consume these end applications and these experiences that are going to power their businesses. So GreenLake for Large Language Models conveys the right level of application that we are bringing to the market.

But it’s the same thing right? Large Language Models is Generative AI?

Large Language Models is a type of Generative AI, but you can get into Generative AI that is made out of images and does videos and does music and all kinds of things. If it is just Large Language Models, you are really focused on natural language processing use cases.

Is it because it also conveys more of an enterprise focus than generative AI?

There is some amazing, amazing stuff happening. I’ll be demoing at Discover some image generation use cases with context toward marketing and branding. We have seen big brands do full-scale national distribution commercials made with generative technology. This stuff basically didn’t exist six to nine months ago. So it’s been wild to watch how quickly the world has leaned in and adopted for many, many different use cases across the enterprise.

Were you surprised by this whole generative AI explosion?

I am surprised by how fast it has grown in the last 12 months. One of my first conversations with [HPE Executive Vice President and General Manager of High Performance Computing] Justin Hotard after I joined HPE was around how big a deal I thought LLMs were going to be. I think I underestimated it by a factor of 10 or 50. It has been really an incredible growth story over the last couple of years here.

What is the difference between what HPE is doing with HPE GreenLake for Large Language Models versus ChatGPT?

One thing is that LLMs, particularly once we get to GPT-3 size, which is the last generation, were built on the back of supercomputing technologies. You can look at the public papers: thousands of GPUs running for months or more to generate those models. That continued, we believe, with the latest generation of larger models. One piece there is that in order to build these foundation models for the last several years, you have needed the expertise to leverage these kinds of technologies.

ChatGPT in particular, there is an awful lot there. A lot of that is that OpenAI has done an amazing job of launching a public service that has captured the imagination of lots of people. You can point to issues with it—hallucinations, lack of attribution and so on—these are widely documented in the public press.

In our partnership with Aleph Alpha, for example, they have features in their model that allow for eliminating some of those things. But it is a highly competitive space and I have no doubt that folks like OpenAI will be able to address some of these issues.

Instead, I look at what our customers need, and what our customers are often asking for is sovereignty of their models. If they are going to share their data with the cloud, they want it to be one that they can trust. They want strong conviction that their models will stay private to them and stay private to their use cases. They can run them in our cloud. They can build them in our cloud. But then they can also take them back with them and run them in their own data centers. And if they don’t even want to share their data with us, they can leverage our technologies to build those models within the confines of their own data centers. That is an advantage that HPE brings to the table.

How big is the opportunity for HPE partners to make money on AI?

We remain absolutely committed to our partner ecosystem. We have always been a channel-focused company. As we roll out these new capabilities we want to continue to empower these partners to leverage and build on this innovation.

While we are launching this AI cloud offering, that is only a fraction of our overall portfolio. We still have a lot of other offerings where partners can do extremely well by selling through more traditional channels. Again, we’ll continue to explore how we can better enable the partner ecosystem as we move forward and this business matures.

Partners have told us that customers are doing AI large language models in public cloud but then when it goes into production they move it to an on-premises or colocation solution because of the high cost. Do you see that?

Absolutely. I think there is a lot of room to do cost optimization, particularly in a space that has become so capital-intensive. If you are thinking one or two machines at a time, then the price and flexibility you get by spinning those things up and down more than outweighs the extra few dollars you might save. But when you are dealing with scale, I encourage customers always to do the math: Do we want to do reserved [instances] or do we want to do pay as you go? Do we want to do on-premises and build out the data center and the cooling and power and operate that, or do we want to find an in-between like a colo provider or a service provider like HPE GreenLake where we can manage the infrastructure as well? That continuum is really important for both partners and customers to understand. You have choice. And, again, one of the things I think we do really well is bring that choice to customers.
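
Sparks’ “do the math” comparison can be sketched roughly as follows. Every rate and the utilization figure here are hypothetical placeholders chosen only to illustrate the continuum he describes, not quoted HPE or cloud prices.

```python
# "Do the math": three consumption models for a sustained GPU workload.
# All rates and the utilization figure are hypothetical placeholders.
GPUS = 64
HOURS_PER_YEAR = 24 * 365
UTILIZATION = 0.80                # fraction of the year the GPUs stay busy

ON_DEMAND_RATE = 3.00             # $/GPU-hour, billed only for busy hours
RESERVED_RATE = 1.80              # $/GPU-hour, billed around the clock
MANAGED_RATE = 1.20               # $/GPU-hour all-in, colo or managed service

busy_hours = HOURS_PER_YEAR * UTILIZATION
costs = {
    "Pay as you go": GPUS * busy_hours * ON_DEMAND_RATE,
    "Reserved":      GPUS * HOURS_PER_YEAR * RESERVED_RATE,
    "Colo/managed":  GPUS * HOURS_PER_YEAR * MANAGED_RATE,
}
for model, cost in costs.items():
    print(f"{model:>14}: ${cost:,.0f}/year")
```

The deciding variable is utilization: at low busy fractions, pay as you go wins; at sustained high utilization, committed or managed capacity does, which is why Sparks urges customers to run the numbers for their own workloads.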

Doesn’t putting a supercomputer in the public cloud fundamentally change the cost and the market?

I don’t want to get too over the top and say we are going to completely replace public cloud. That said, I think we are providing a very powerful alternative to the public cloud for this specific class of workload, which we think is going to be foundational toward that next generation of computing we are entering right now. So it is an ability for us to deliver a very differentiated experience to customers in that space.

How do you feel now that you have HPE GreenLake for Large Language Models?

We’re pretty excited.