HPE CEO Antonio Neri On The ‘Fourth’ Public Cloud, Expanded AWS, VMware Partnerships And The AI Opportunity For Partners

Hewlett Packard Enterprise CEO Antonio Neri says the company’s new AI public cloud offering, HPE GreenLake for Large Language Models, marks the emergence of a fourth public cloud with a “massive, massive” data ingress/egress pricing benefit for customers.

An AI Public Cloud Without Data Egress Fees

Hewlett Packard Enterprise CEO Antonio Neri says the company’s new AI public cloud, HPE GreenLake for Large Language Models, provides a “massive, massive” data ingress/egress pricing benefit for customers.

“You don’t have to move the data so you are training privately the (AI) model,” said Neri in an interview with CRN at HPE Discover 2023. “The fact that you are not ingressing data and then egressing it back that is a massive, massive benefit to begin with.”

As to the specific cost benefits of running AI workloads in the HPE public cloud versus other public clouds, Neri said it will depend on the size of the data.

“It could be little or it could be huge because not all models will be that large,” he said. “There are some that will be smaller. The problem is, to make the model accurate you need to feed it, which means more data, more data, more data, more data. The more data you feed it, the more accurate the model becomes.”

One of HPE’s biggest advantages in the AI market is its long-standing AI software optimization capabilities, said Neri.

“Everybody is going to use the GPUs and CPUs that are available,” he said. “There is no difference between the public cloud and ours (in terms of GPUs and CPUs). They obviously have large scale, but what we know how to do is reduce the amount of capex through our software optimization so we can extract better performance out of the system. So it is not just the cost of commodities; it is how we manage the entire system for the best full performance.”

HPE already has shown that it can deliver better performance at 200 gigabits with its supercomputers than other systems operating at 400 gigabits. “That is a significant capex benefit that we can translate into a pricing benefit for customers when they consume (AI) as a service,” he said.

Neri also sounded off on the expanded relationships and the potential to work on even further co-innovation with both Amazon Web Services and VMware.

AWS Vice President of Technology Dr. Matt Wood shared the stage with Neri at HPE Discover, highlighting new HPE-AWS hybrid capabilities including support for Amazon EKS Anywhere managed Kubernetes for HPE GreenLake for Private Cloud Enterprise and the HPE NonStop Development Environment, now available on the AWS Marketplace.

VMware CEO Raghu Raghuram also shared the Discover stage with Neri, highlighting the new HPE GreenLake for VMware Cloud Foundation.

Here is an edited transcript of the conversation with Neri.

How does it feel to be in the public cloud business?

We have got to execute, and things take time. The way we have gone about it is the typical way HP in the old days would do it, which is deep execution on the engineering side that creates long-term value for shareholders and brings the partners along on that journey. Because ultimately they are only as good as we enable them to be with the technology and the understanding of it.

How does this AI public cloud change the character of HPE and the GreenLake value proposition for partners and customers?

It puts us in the cloud sphere. There are now four clouds, not three clouds (AWS, Microsoft, Google Cloud), but this is a very special, capability-driven cloud that allows us to enter the market with our unique intellectual property. It is not the typical way (public cloud) has been built, which is a lot of capex and a lot of software around it that took maybe 16 or 17 years to grow.

We are intersecting a major inflection point where these large language generative AI models have become very mature and still need more development but it is intersecting our unique intellectual property. In this case it is the supercomputer aspect of it. That is why it is almost like you get to take a second bite of the apple – the apple being the cloud.

If you look now at what we have done, we focused on the edge; the edge was all about being connected. At the edge we are bringing (AI) inference capabilities to expand the number of inference systems and ways to engage. Obviously mobile phones are inference devices.

Inference is a big opportunity for us. We have the connectivity, we have the computing power, we now have more software. As we make these AI models available for training and tuning then we can push it down to the inference.

In the traditional cloud space, we have been hybrid now for some time, and the expansion with (data center colocation provider) Equinix now gives us completion of on-prem and off-prem in an on-premises kind of approach, which is basically a colocation strategy.

Then we have partnered with the three public clouds. You have to, because as I said, without the public cloud you have not completed your hybrid strategy. But now we have added a fourth cloud, which is our own cloud, very uniquely positioned for this AI inflection point. And all of them, whether it is the edge, the colo, the on-premises data center, the three public clouds or our own cloud, are all delivered through one unified experience: HPE GreenLake.

So when I think about the value for customers and partners, you can do everything you need from one platform. You can broker services, connect devices, manage infrastructure, and run data analytics or AI-type algorithms. So you can do it all under one platform. And then that platform is open for our partners to add their value on it. Obviously we continue to work with them on APIs and services.

What is the big difference between the HPE GreenLake for Large Language Models public cloud versus AWS, Google Cloud and Microsoft?

Our strategy in AI is a software-led strategy to begin with. It is a subscription model to these AI applications, which are going to be very vertical in nature. LLMs tend to be more of a horizontal thing. You can apply different use cases, but the value will be in the vertical AI applications that address specific opportunities and challenges, whether it is diffusion, computational fluid dynamics, molecular research, energy transition, climate or autonomous driving in the future. These are very vertical use cases with unique AI challenges, so we are going to continue to partner and make available our own AI applications because we have deep expertise built on years and years of optimizing AI models in very unique ways, whether it is climate, life science and so forth. We are going to make those available, and then we are going to bring in partners like (German AI startup) Aleph Alpha (which is partnering with HPE to deliver its Luminous natural language capability to the HPE AI public cloud) as a great example. They will make their models available on it.

If you are a customer that wants to come and train your own model, meaning a model you have built from scratch, you can consume supercomputers as a cloud in an IaaS (Infrastructure as a Service) model. There we are really focused on making it available as multi-tenant in a reserved capacity because you need a lot of GPUs, CPUs and memory. But then if you are consuming the AI application in a subscription model, what you do is basically subscribe to the app, and then the infrastructure comes with it as part of the pricing and then you privately train your data.

So for partners it is great because there really have been very, very, very few partners that have been able to enter supercomputing and high-performance computing, which are two different things.

Generally there has not ever been a space for them (in supercomputing). But now all of a sudden they expand their TAM (Total Addressable Market) as we are expanding our TAM to be able to be relevant (in the AI market). But they will have to focus on the services side, more consulting, advisory with understanding of how to develop or tune these models. They don’t need to know anything about infrastructure because infrastructure is delivered for them through our AI cloud.

So what do partners have to do to align with the HPE AI opportunity and make the most money?

It is the services business. If you are a vertical-focused partner in oil and gas, for example, you obviously want to go a little bit deeper and understand some of the data science. But generally it will be more software AI engineers that can translate the requirements into an AI model that they can train and tune all the time. Then think about the AI lifecycle journey that I showed yesterday. This is where the partners will work really well because they understand the edge, they understand computing, so they can deploy the training model down into the inference, because that is what they have known and been doing for decades. Now that the (AI) model works, it is about how you put it into production, deploy it and manage the stack in edge environments. That’s the opportunity here.

How big is the Total Addressable Market for this opportunity?

The TAM just in the supercomputer (business) has more than tripled for us.

So you have to focus on the expanded TAM which is services oriented. This does not include the TAM at the inference which is the traditional way. That is why we unveiled a complete portfolio from training and tuning with AI cloud to inference with a very specific set of recipes around Gen 11 ProLiant which is a system that can do inference at scale. That is why I call it from edge to exascale.

What is the cost difference between public cloud and the HPE AI public cloud, since there are no data egress fees with your public cloud?

You don’t have to move the data so you are training privately the model. The fact that you are not ingressing data and then egressing it back that is a massive, massive benefit to begin with.

How big a cost advantage is that?

It depends on the size of the data. It could be huge. It could be little or it could be huge because not all models will be that large. There are some that will be smaller. The problem is, to make the model accurate you need to feed it, which means more data, more data, more data, more data. The more data you feed it, the more accurate the model becomes. At some point you have to realize what is good enough and accurate enough for me to leverage it.

On the cost for computing one of the advantages we have in my mind is our software optimization. Everybody is going to use the GPUs and CPUs that are available. There is no difference between the public cloud and ours (in terms of GPUs and CPUs). They obviously have large scale but what we know how to do is how to reduce the amount of capex through our software optimization so we can extract better performance of the system. So it is not just a cost of commodities, but it is how we manage the entire system for the best full performance.

People say, well, you need to go from 200 gigabits to 400 gigabits to 800 gigabits. The reality is that today we can show that even at 200 gigabits our system can perform, because of our software, particularly on the contention side for supercomputing, like any other system that may be operating at 400 gigabits. So that is a significant capex benefit that we can translate into a pricing benefit for customers when they consume (AI) as a service.

Will HPE look at offering more public cloud capabilities?

I don’t think so, because that is a very capital-intensive business. We believe we can achieve the same thing through partnerships. But then we went to colocation, which is something that none (of the public clouds) have been providing extremely well. That’s why we take our private cloud portfolio with PCE (Private Cloud Enterprise) and PCBE (Private Cloud Business Edition), pre-provisioned for fast deployment and simplicity of management in a cloud environment, and offer that as another instance in GreenLake, no different than an instance in a public cloud.

Customers will decide based on the gravity of the data, the compliance of the data, and the cost related to these workloads. Because the more data intensive the workloads are the cheaper it is to run it yourself. That is a fact.

Now maybe for space or power issues or constraints you can’t do it in your own data center because you don’t want to grow it, but then what you do is take certain workloads and move them to the colo, still under your control and managed under GreenLake, and then free up capacity for the other things you may want to do in your own data center.

So for us to enter the (general) public cloud (market) makes no sense. That train has left, but this AI is totally different because we have something that others don’t have. We believe we can compete. But at the same time it is very important to understand there will be opportunities to partner with our cloud and the public cloud even in AI. Because sometimes a specific AI model may require more intense supercomputing, the public cloud can offload that to our cloud, and there will be other things that we may offload to the other (public) clouds.

You had AWS here at HPE Discover. Talk about the AWS relationship and the opportunity there as a hybrid cloud capability?

It starts with the traditional workloads. I think there was a realization on both sides just to be clear and this conversation started a while back between (Amazon CEO and former AWS CEO) Andy Jassy and myself. It is not like something that happened overnight. That was two years ago.

Two years ago we started a conversation about a way we could do more together. So we started understanding what the market opportunity was, but it was really driven by the customer pain points. They understand the world became hybrid.

Even Andy, standing on stage maybe a year ago, said only a small percentage has moved to the public cloud. He said that. And then on our side it was also realizing that obviously the public cloud has been a significant driving force, and that without the public cloud we are not going to give customers a complete hybrid experience. So we both came from a customer-driven point of view. Then we said, from a product management perspective, how do we solve those challenges in a co-innovative kind of way.

Everything that (AWS Vice President of Technology Dr.) Matt (Wood) covered yesterday (on the HPE Discover main stage) is a co-innovation. It is not a commercial relationship. The commercial comes after. The question is how do we do hybrid backup and recovery? How do we do storage? How do we do other things?

There is a realization that, and we already started even talking yesterday, there could be an opportunity to partner in the AI space. They already understood that, which is why we made our NonStop Development Environment available (in the AWS Marketplace). We have unique expertise in these types of mission-critical or AI-driven workloads that they could benefit from and we could benefit from on both sides.

It is when trust is built that more opportunities open up. I like working now with (AWS CEO) Adam (Selipsky). Obviously Adam has been a driving force inside AWS. We talk to Adam…We talk to people like Matt. So we will continue to evolve what is the next opportunity. But I think AI for sure will be something we will work together on.

How many HPE developers are now doing co-innovation on AWS?

Look at GreenLake where a lot of our instances run on AWS. It is developers that use many of the tools. We use a lot of their tools. We built our own tools. We run some of the instances on AWS. It is very symbiotic in many ways. It is good for customers and then by definition because of our strategy it opens up everything to our partners.

Now it is about scale. Think about it: 65,000 unique customer instances are mapped to 22,000 organizations, and those 22,000 organizations manage 2.5 million devices. What is astonishing, and what hopefully people will grasp, is that through our cloud, HPE GreenLake, we manage 17 exabytes of data, whether with traditional storage that is a subscription model now or as a block of service under the traditional GreenLake offer. Now you have two things moving at the same scale. You have exabytes and exascale.

Therefore we can operate at the exascale level, which is important because as data grows there are more insights you can grab from that data. The difference is that in the past most of the data was structured, relational databases of sorts. That had limited insights. You could do some business intelligence, but in the end, when you go to this cloud world and now with AI, you are deeply, deeply into unstructured data. That is where a lot of the magic could happen because there are a lot of insights that are trapped that you have to harvest. That is why AI is a major inflection point, because that opens up all those insights, and then those insights can be trained like a brain. Think about your brain. The more you feed the brain, the more it understands and the more things can be developed.

How important was it that VMware CEO Raghu Raghuram (pictured) was here on stage with you at HPE Discover announcing HPE GreenLake for VMware Cloud Foundation. Has there been a change since VMware split off from Dell Technologies?

I don’t have the latest data, but I think HPE is still the largest partner for VMware. But Raghu obviously had to navigate through a few twists over the years, from EMC to Dell.

Obviously if you are a software company that is supposed to be neutral, that is a problem, because you constrain your market if you are not partnered with the rest of the vendors in an equal way. Then after the (Dell) spin-off that became better. And Raghu and I, since we have known each other for 15 years, have never lost touch, never lost contact. It is obvious the environment changed around it, so we were able to do certain things that were a little bit more leading edge, like making VMware Cloud Foundation available through GreenLake. It is a little bit of a co-innovation with a business model that is different, a subscription model. And then there are further opportunities down the road. The fact that he was here is a testament that we both value the relationship and the partnership. I think there will be more things we can do together.

Now that you have the AI public cloud are you structuring the company differently with R&D?

So (HPE Executive Vice President and General Manager High Performance Computing) Justin (Hotard) (pictured) owns all the R&D for AI and he owns also HPE Labs. Three years ago, I re-pivoted HPE Labs with Justin to be 100 percent focused on these leading technologies that support AI whether it is silicon photonics, whether it is AI software. All of that has been focused in that (AI) direction.

Now he also has a complete dedicated go-to-market (responsibility), from selling capacity to presales, which is super important, aligned with our advanced professional services on AI expertise and the specific workshops we are doing. In many ways a lot of solution engineering has to take place (for these AI workloads).

It is a different business model to begin with. (HPE CTO) Fidelma (Russo) obviously owns pretty much the entire hybrid cloud from platform to execution.

(HPE Executive Vice President and General Manager Aruba HPE Intelligent Edge) Phil (Mottram) owns the entire edge. He is doing a terrific job.

And (HPE Executive Vice President and General Manager Compute) Neil (MacDonald) has the usual grinding computer (business).

I am proud of all of them because even Neil has done a fantastic job with compute, grinding it through, driving a level of profitability that is beyond insane, and honestly dealing with all the supply chain challenges.

(HPE Executive Vice President and General Manager Storage) Tom Black has also done a remarkable job.