CEO Antonio Neri On ‘Massive Growth’ From AI Builders And The Difference Between HPE’s AI Public Cloud And Microsoft’s AI Strategy

HPE President and CEO Antonio Neri said partners will soon be able to start selling instances of HPE’s AI public cloud, GreenLake for Large Language Models, which will be available in North America in the first quarter of next year.

The HPE GreenLake for LLM public cloud is being built in partnership with German AI startup Aleph Alpha, which HPE recently invested in as part of its public cloud AI buildout.

“We are really proud of the partnership with Aleph Alpha,” said Neri in an interview with CRN. “We started that journey early on. What we bring to customers is the ability to consume this [generative AI] model in a virtual private cloud with a ready-to-deploy LLM that actually was architected to support six different languages in a more generic type of LLM.”

HPE’s LLM offering opens the door for customers to tune their AI models in a virtual private cloud, in contrast to Microsoft’s AI strategy, which is focused on “integrating generative AI in their applications with Copilot, which makes sense because they have the applications,” said Neri.

“For us it is about helping customers to either build the model or train the model,” he said. “In this case, they don’t need to build an LLM—they can use Aleph Alpha and they can just fine-tune the Aleph Alpha model with our computing power and our expertise with their data for whatever business transformation they are driving.”

Neri said it is critical that HPE partners build services capabilities to help customers deploy AI solutions. “Their expertise needs to be around the models and the software and services,” he said. “We can bring to bear all the rest of the solutions that they need so they can deploy quickly for customers.”

As HPE rolls out its AI public cloud, partners will be able to sell “capacity on those public instances for training or tuning models,” said Neri. “That is key. That will be available in North America in the first calendar quarter of 2024. We are gated by supply availability.”

How do you feel about the Fiscal Year 2023 financial results?

For the year it is fair to say that we delivered impressive business and financial results.

Our steady execution resulted in higher revenue, further margin expansion, a larger operating profit, and record-breaking non-GAAP diluted net earnings per share and free cash flow, which was the highest in the company’s history.

We delivered extraordinary innovation to customers. That is across our entire portfolio, which is making HPE more relevant than ever.

Because of the progress we made this year with our financial performance and our innovation and the guidance we provided at our SAM [Securities Analyst Meeting] and the line of sight we have in front of us, we are raising our dividend in Fiscal Year 2024 by 8 percent, or a penny above what we had before.

What are some of the highlights from the full fiscal year?

We saw healthy sustained revenue growth to $29.1 billion. That is 5.5 percent [in constant currency], which is above the midpoint of 5 percent that we guided to as of Oct. 19.

We expanded gross margins, which exceeded 35 percent, and are up on an annual basis by 130 basis points. When you compare the gross margin of 2023 to the gross margin we had in 2018, we actually have improved our gross margin by 500 basis points since I became CEO of the company.

We closed the year with more than $1.3 billion in [GreenLake] ARR [annualized revenue run rate]. That is up 39 percent year over year.

Our non-GAAP operating margin was up 20 basis points to 10.8 percent. Our non-GAAP diluted earnings per share was $2.15, which is the top end of the guidance we provided on Oct. 19. That is up 6.5 percent year over year. That is record-breaking for the company. When I became CEO, it was 96 cents in 2018. Now it is $2.15.

One area that I am really, really proud of is our free cash flow performance. That was always something that bothered me because we had unique situations from the past that we had to take care of. This year was the first clean year of free cash flow where we actually exceeded our commitment, which was $1.9 [billion] to $2.1 billion.

So we delivered $2.2 billion of free cash flow, which is up $400 million year over year, or 25 percent. The reason that happened is because we had better earnings to begin with. We had better net income conversions to free cash flow and we managed our working capital and expenses in a very, very disciplined manner.

HPE had an impressive year, a very important year and we delivered a lot of value to shareholders with a tremendous amount of innovation for our customers and partners.

Talk about the performance of the business units including Intelligent Edge and HPC and AI.

Intelligent Edge was rampant for us. We grew 40 percent in the quarter. It now represents 18 percent of the total company revenue for the full fiscal year and 39 percent of the total company profit. On a year-over-year basis, because of the margins we are driving through software and the scale of our subscription model, we improved that business’s operating profit margin by 1,600 basis points. That is remarkable.

We gained share in the key segments. This is a business where we added more than $2 billion in revenue in the last two years. So that’s a growth engine for us. We added more assets throughout the year with Axis Security, private 5G and, two years ago, Silver Peak. We expect those to continue to be growth drivers for us.

With GreenLake, we now have 29,000 customers on the platform, up from 27,000 a quarter ago. The total contract value of the as-a-service business now exceeds $13 billion. Last quarter, it was $12 billion.

What are you seeing in terms of AI adoption?

AI is booming. It is exploding. When you think of AI, you have to think of it in three different segments: HPC [high-performance compute], which is steady with simulation and modeling; supercomputing, which is now key for governments and academia doing large amounts of AI research in biology, life sciences and the like; and then AI itself, which you have to look at in terms of the full life cycle from training to tuning to inferencing.

What we see in the training side is massive growth from customers who are the model builders. The model builders are the companies building generative AI foundation models like OpenAI, Crusoe, Northern Data Group and Taiga Cloud. Many of these innovative companies are using generative AI to advance some of their models and they need massive amounts of computational power, which is a supercomputer. That is why they come to HPE: to build these large AI native clouds.

HPE has the [AI] expertise. We build it and we help them run it. There we have made a number of announcements, including at the Supercomputing 2023 [SC23] conference with Nvidia for generative AI using their IP and our IP in a combined solution. We announced generative AI with the Nvidia Grace Hopper [GH200 Superchip configuration] with our silicon.

What is the significance of the expanded AI enterprise collaboration with Nvidia?

Think about it as an enterprise solution in a box that you can deploy on-premises to fine-tune some of these [AI] models that you are not going to build but you are going to leverage with your data.

Data is the king. You don’t want to put that data in the public domain. So we allow customers to deploy these [AI] models at scale on-premises in a simple offer that they can deploy and consume.

Then there is AI inferencing. Once you train and tune your models and you are ready to do the inferencing, [this] is where the business transformation and the power of real-time processing happens to make decisions faster.

So [AI] training and model building is exploding. Enterprise fine-tuning is starting to grow and AI inferencing—we are at the beginning. That is why we saw sequential improvement in the traditional compute business. We are already seeing the mix increasing on APUs [accelerated processing units], which gives us encouragement about standardization and growth over time.

On the demand for AI training specifically, at the beginning of the year we had less than $100 million in orders for AI. On a cumulative basis, we ended the 2023 [fiscal year] with $2.4 billion in AI orders.

On top of that you have our supercomputing, which is a different segment. There we booked $1.2 billion. So for the full year between supercomputing and AI, we booked $3.6 billion. Obviously, the mix has shifted to APUs. There is significant growth and the pipeline is massive.

How did partners perform during the quarter?

Our compute business in the [fourth] quarter grew 8 percent sequentially [through partners].

Our HPC and AI infrastructure and services grew 37 percent quarter over quarter through partners.

Storage grew 11 percent sequentially [through partners].

Intelligent edge was slightly down [sequentially] but up 51 percent year over year.

Our tier two and tier three, which is the traditional commodity business, grew 1 percent.

For GreenLake, where we have had 12 consecutive quarters of growth, we grew triple digits year over year, up 113 percent. We are very pleased with that. Remember, our north star for the strategy is HPE GreenLake, where we deliver experiences at the edge, hybrid cloud from edge to cloud and then obviously AI for data-driven decision-making, with generative AI now being the most intensive data-driven workload. The partners are coming along as they make the decisions on where to play and how to win.

What does the AI boom mean for partners, and what do they have to do to be successful working with HPE?

For the partners it is important to build the capabilities around the services side focused on how to deploy these models. Their expertise needs to be around the models and the software and services. We can bring to bear all the rest of the solutions that they need so they can deploy quickly for customers.

Ultimately, as we stand up our public instances of our HPE GreenLake LLM, which we announced at HPE Discover, they can go sell capacity on those public instances for training or tuning models. That is key. That will be available in North America in the first calendar quarter of 2024. We are gated by supply availability. We are building the site with the power and cooling. It takes time.

What is the difference between the way HPE is approaching its public cloud LLM solution with its partnership and investment in Aleph Alpha versus the Microsoft approach with ChatGPT?

We are really proud of the partnership with Aleph Alpha. We started that journey early on. What we bring to customers is the ability to consume this [GenAI] model in a virtual private cloud with a ready-to-deploy LLM that actually was architected to support six different languages in a more generic type of LLM so customers can tune that model to their needs in one of our public instances versus what Microsoft is doing, which is obviously integrating generative AI in their applications with Copilot, which makes sense because they have the applications.

For us it is about helping customers to either build the model or train the model. In this case, they don’t need to build an LLM—they can use Aleph Alpha and they can just fine-tune the Aleph Alpha model with our computing power and our expertise with their data for whatever business transformation they are driving.