HPE Discover 2023: CEO Antonio Neri’s 5 Biggest AI Statements

HPE CEO Antonio Neri sounds off on the company’s AI future, including his no-holds-barred AI partner commitment and the importance of making AI available to businesses of all sizes.

HPE Enters The AI Public Cloud Arena

Hewlett Packard Enterprise CEO Antonio Neri shook up the IT industry this week by unveiling a new HPE AI public cloud, HPE GreenLake for Large Language Models.

HPE GreenLake for Large Language Models, which provides access to generative AI, will be powered by HPE Cray XD supercomputers with NVIDIA H100 GPUs. It will be available by the end of the calendar year in North America, with availability in Europe early next year.

HPE is providing a wide range of AI Large Language Model services including strategy, design, operations and management of AI workloads, but partners will be able to participate in the full range of AI opportunities, said Neri.

“We are bringing partners to this journey, otherwise there are only 10 people selling it and that is not right,” he said. “We have been and will continue to be a partner-led company in everything we do because it gives us complementary expertise. Obviously it gives us reach in go-to-market in countries and segments, and then ultimately it drives loyalty with our customers…We are making sure the partner ecosystem is also part of this journey.”

Here are Neri’s five boldest statements on AI from the press conference at HPE Discover 2023.

Partners Will Be Able To Participate In AI Market Including With HPE’s New Large Language Model Public Cloud

The moment you are on HPE GreenLake you will be able to resell it (Large Language Models Public Cloud). This is one of the beautiful things about our approach: our broader partner ecosystem, which includes solution integrators, ISVs, distributors, value-added resellers and the like, generally is not in this space.

But now with AI we give them the ability to focus on these applications as part of the GreenLake catalogue and then basically sell it like they sell anything else, like private cloud or an Aruba switch. Now they can sell these (AI) models as part of their engagement with customers.

What they have to do though is either leverage our HPE Services expertise because obviously there are some potential consultative aspects of this or they will invest themselves in building these capabilities which makes them relevant from a business outcome perspective and a technology perspective.

The technology is almost invisible behind the scenes. They just say this is what I want to accomplish, this is the right pre-trained model or if it is a very large customer they can bring the model to our cloud and train it and we will reserve some sort of capacity.

But I would also say this is also a major point of differentiation for us compared to others because it includes our partner ecosystem as part of the solution. We drag them along versus selling direct.

More AI Partnerships To Come With A Vertical Focus

In the future we are going to bring more partners to the equation. But remember, LLM is one thing. Then as you go through the applications catalogue it will be way more verticalized for specific verticals, whether it is climate, whether it is pharma…whether it is transportation, financial services or other types of verticals.

So we’re going to expand that. We have quite a bit of experience in tuning AI models at scale already for verticals and use cases. In the future we will work with others. We have already talked to a lot of companies. Now that they understand we are going to be in this market they understand they can get access to these (supercomputing public cloud) capabilities in a way that allows them to train, scale and tune in a sustainable way. Ultimately they will be able to have choice and flexibility.

Making AI Supercomputers Available To Everyone

The supercomputer has been available to only a few privileged people. As I think about these types of technologies, one of the requirements of supercomputing is how to make it inclusive and available to all.

One of the challenges that we have with this type of technology is if it is only available to a few then there will be this massive, massive divide.

We saw some of this with the pandemic. Through the pandemic what happened? Everybody had to go home and they had to operate in a digital environment.

Digital-led business models were able to thrive, grow and succeed. Those who were not digitally enabled really suffered.

Now we are going into this AI and ML world, and those that don’t have AI embedded are going to be left behind.

The other aspect of this is how do we make it inclusive, inclusive meaning it is available so they can go at the speed they need without having all the capital and expertise to do it. Our point is that they have the data; we just need to protect the data and bring them the model so they can deploy it for whatever they are trying to accomplish. That could be mid-market customers or small and medium business customers.

HPE’s Big AI Differentiators – An IP Advantage

Number one is the IP (intellectual property) that we have to deliver cloud infrastructure at scale as a capability-based (AI offering) not capacity-based.

Number two is an engineering understanding and the services to build the right sustainable data centers around these types of technologies.

Number three, our magic is in what I call two vectors: one is the interconnect fabric that allows us to mix and match different types of GPUs and accelerators based on the type of AI workloads that you need to run, versus being very proprietary on one ecosystem.

Number four is that we have AI software technologies that allow us to run these at massive scale. I said earlier, when you run some of these AI workloads, whether you use 400 GPUs, 4,000 GPUs or 40,000 GPUs, all 40,000 of them have to work together concurrently and be up and running all the time. If one of them fails, they all fail.

So we know how to run this at scale. In order to drive trust, which is probably the most important metric related to AI, you have to drive completion rates, and the only way to drive completion rates is by driving reliability and consistent performance. These are big differentiators.

Also, at HPE we have been tuning and training some AI models for a long time. We have been in this space for a long time. Many of the supercomputer systems in the market for climate forecasting are HPE or Cray systems. Many of the systems in life sciences are HPE systems. Obviously in the US Federal Systems business there are a lot of Cray and HPE systems. Remember, three years ago we announced we were building two large supercomputers that HPE deployed and is operating for the National Security Agency. This is public information. So we have a lot to be proud of.

The question is how do we make it available to all. The message is simple: we are going to make supercomputers available as a cloud offering and we are going to lead all of this with a software subscription model that is easy to consume.

AI Regulation – Be Part Of The Solution

This is something that the private sector and the public sector will have to work together on. So whether it is the understanding of what it does and does not do, or the risk associated with this, we have to be part of the solution.

Don’t be overly concerned about over-regulation, because ultimately the government has to protect citizens, but at the same time it can’t stop advancement and evolution. So we have to be part of that conversation and figure out what is the right outcome. As I said, two weeks ago I went to Washington, D.C., to have some of these conversations.

There has to be some regulation at some point but it has to be the right regulation.