VMware, Nvidia ‘Reinvent Enterprise Computing’ With vSphere 8
‘We can offload processing of software-defined infrastructure tasks, like network processing, like storage processing, to the DPU. And now you get accelerated I/O. And you can have agility for developers because all of that storage and network processing is now running in the DPU. Now you have the entire cluster of CPUs to run this distributed workload that you talked about, whether it’s containerized or in a VM. That’s the power of the DPU,’ VMware CEO Raghu Raghuram says.
The goal of Project Monterey, introduced yesterday as vSphere 8, wasn’t to increase the available compute cycles in data centers or to reduce their power consumption.
Rather, the goal was to prepare the data center for the huge workloads of the future, according to VMware CEO Raghu Raghuram.
“It’s becoming mission critical to every enterprise. Nobody’s going to talk five years from now about ‘that AI-enabled application,’” Raghuram said. “It will just be assumed that machine learning is part of every application.”
“That’s right,” said Nvidia CEO Jensen Huang, who was sitting beside him at VMware Explore, where the two took the lid off the highly anticipated project in a fireside chat that was two years in the making.
Compute cycles in the modern data center are eaten up by infrastructure tasks that keep the systems running. As cloud adoption rises, and with it the use of cloud services, those infrastructure tasks have become more demanding, leaving fewer compute cycles for the workloads that generate revenue.
To overcome this, Raghuram and Huang said their companies worked together to “rearchitect” the data center using a data processing unit, or DPU. The DPU is not a new idea. It’s a board that runs hardware accelerators that perform certain functions very efficiently. In the case of Project Monterey, the DPU takes on those infrastructure tasks so the data center’s processors can be turned over to the work they were designed to do.
Because of the compute power needed to run machine-learning programs, AI applications have resided in dedicated locations, and data must travel to them to be processed. No longer, Raghuram said.
“Better still, you can run AI alongside your mainstream workloads, so you get better efficiency, better efficacy, cost management, overall management,” Raghuram said. “It gives customers exactly what they want: the best in AI, managed on the platforms they trust.”
Read the rest of their conversation below: