AI Product Chief On Why HPE Has The Right Strategy To Win A ‘Significant Chunk’ Of Public Cloud AI Market
HPE Chief Product Officer of AI Evan Sparks says the company’s new supercomputer AI public cloud offering, HPE GreenLake for Large Language Models, built for generative AI, provides ‘significant’ cost savings over public cloud competitors.
The supercomputing-class public cloud capability that HPE is bringing to the table provides “efficiency and reliability” in running AI models that spell big savings for customers compared with rival public clouds, said Sparks.
“When you are talking about monthlong jobs that use 1,000 GPUs and you save 20 percent, that savings goes directly to the customer, directly to their bottom line, and adds up to significant kinds of dollars,” said Sparks in an interview with CRN. “The other thing is reliability. When you are leasing [public] cloud GPUs, you pay for them whether they work and solve your problem or not. You might get a node that fails partway through and so on and you paid for the time it took to run your job up until that point, even if the end result is no good for you at this point.”
HPE, said Sparks, is providing a “very powerful alternative to the public cloud” for large language models. “We think this is going to be foundational toward that next generation of computing that we are entering right now,” he said.
As to the specific data HPE has seen with regard to customer savings in initial trials of its public cloud AI offering, Sparks said, “We have a program where we do a total cost of ownership approach with our customers in these kinds of settings and the savings are significant.”
The bottom line: HPE’s battle-tested supercomputing and high-performance computing prowess in building reliable AI systems with good output means the total cost ends up “being way lower” than rival public cloud offerings because “you are not spending your time refining over and over these jobs that fail,” said Sparks.
Sparks, the former CEO of Determined AI, a highly regarded machine learning company that was acquired by HPE two years ago, said he has been “surprised” by how fast the large language model AI market has taken off. “One of my first conversations with [HPE Executive Vice President and General Manager of High Performance Computing] Justin Hotard after I joined HPE was around how big a deal I thought LLMs were going to be,” he said. “I think I underestimated it by a factor of 10 or 50. It has been really an incredible growth story over the last couple of years here.”
Sparks said HPE has the right strategy, people and intellectual property to win a “significant chunk” of the public cloud AI land grab.
“This is exciting because we are intersecting that market as it is really becoming massive,” said Sparks. “Don’t get me wrong. It is also going to be a highly competitive space going forward too. There’s going to be a bit of a land grab. We have to be convinced that we have the right strategy, which I am convinced that we do, and that we have the right portfolio of people and intellectual property to really win a significant chunk of this market and I think we can.”