VMware, Intel Seek To Help Businesses Build And Run AI Models On Existing Servers

The main purpose of VMware Private AI with Intel is to enable the virtualization giant’s customers to use existing Intel-based infrastructure and open-source software to ‘simplify building and deploying AI models’ with an emphasis on ‘practical privacy and compliance needs,’ the company says.

VMware and Intel said they are collaborating to help businesses adopt privacy-minded AI solutions faster by eliminating the guesswork involved in making those solutions run well on existing infrastructure.

At this week’s VMware Explore 2023 event in Barcelona, the virtualization giant said it has teamed with Intel to develop a validated reference architecture called VMware Private AI with Intel, which consists of VMware Cloud Foundation and its AI computing features as well as Intel’s Xeon CPUs, Max Series GPUs and AI software suite.

[Related: Gelsinger: Intel Is Prioritizing AI Sales Enablement For Partners]

The reference architecture is set for release by next month, and it will be supported by servers from Dell Technologies, Hewlett Packard Enterprise and Lenovo running fourth-generation Intel Xeon CPUs and Intel Max Series GPUs.

Chris Wolf, vice president of VMware AI Labs, said in a statement to CRN that the reference architecture will create new opportunities for VMware and Intel’s joint partners.

“Our broad and growing ecosystem of AI apps and services, MLOps tools, AI hardware and data services is creating considerable optionality by which our joint partners can customize and differentiate,” he said.

The reference architecture is an alternative to the VMware Private AI Foundation with Nvidia platform, which was unveiled in August and enables businesses to develop and run AI models on Dell, HPE and Lenovo servers powered by Nvidia GPUs, DPUs and SmartNICs.

Intel is keen to challenge Nvidia’s dominant position in the AI computing space with not just GPUs but also CPUs with AI acceleration capabilities such as Advanced Matrix Extensions. Tuning its hardware and software to run AI workloads well on VMware’s multi-cloud platform is an important step in giving the semiconductor giant a better fighting chance as it ramps up competition in silicon.

“With the potential of artificial intelligence to unlock powerful new possibilities and improve the life of every person on the planet, Intel and VMware are well equipped to lead enterprises into this new era of AI, powered by silicon and software,” said Sandra Rivera, the outgoing executive vice president and general manager of Intel’s Data Center and AI Group, in a statement.

Enabling AI Work With Emphasis On Privacy, Compliance

The main purpose of VMware Private AI with Intel is to enable the virtualization giant’s customers to use existing Intel-based infrastructure and open-source software to “simplify building and deploying AI models” with an emphasis on “practical privacy and compliance needs,” according to VMware.

This applies to infrastructure wherever “enterprise data is being created, processed and consumed, whether in a public cloud, enterprise data center or at the edge,” the company said.

By tapping into existing infrastructure, businesses can reduce total cost of ownership and address concerns of environmental sustainability, it added.

“When it comes to AI, there is no longer any reason to debate trade-offs in choice, privacy and control. Private AI empowers customers with all three, enabling them to accelerate AI adoption while future-proofing their AI infrastructure,” Wolf said.

The AI computing reference architecture covers the crucial steps of building and running AI models, from data preparation and model training to fine-tuning and inferencing. The use cases are wide open, from accelerating scientific discovery to enriching business and consumer services.

“VMware Private AI with Intel will help our mutual customers dramatically increase worker productivity, ignite transformation across major business functions and drive economic impact,” Wolf added.

Intel’s AI software suite consists of “end-to-end open-source software and optional licensing components to enable developers to run full AI pipeline workflows,” according to VMware. This includes Intel’s oneAPI framework, which lets developers write code once and target multiple types of processors, as well as the Intel Extension for Transformers and Intel Extension for PyTorch.
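Neither company published code samples with the announcement, but as a rough illustration, the snippet below is a minimal sketch of what CPU-only inference through the Intel Extension for PyTorch might look like on a 4th Gen Xeon host. The model name and prompt are placeholders, and the bfloat16 setting is what lets Advanced Matrix Extensions accelerate the underlying matrix math.

```python
# Minimal sketch: optimizing a Hugging Face causal language model for Xeon
# inference with Intel Extension for PyTorch (ipex). Model and prompt are
# illustrative placeholders, not part of the VMware/Intel reference architecture.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM follows the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# ipex.optimize applies CPU-side operator optimizations; running in bfloat16
# allows 4th Gen Xeon AMX instructions to accelerate the matrix multiplies.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Private AI keeps data on premises because", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```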

VMware Cloud Foundation provides complementary features for building and running AI models, such as vSAN Express Storage Architecture for accelerating capabilities such as encryption, vSphere Distributed Resource Scheduler for maximizing hardware utilization for AI model training, and VMware NSX for micro-segmentation and threat protection capabilities.

The multi-cloud platform also comes with secure boot and Virtual Trusted Platform Module features for enabling model and data confidentiality.