Nvidia: Equinix Private AI Service Will Help Partners Close DGX System Deals ‘Faster’

The new service is meant for businesses that want privately owned AI supercomputers but lack the data center infrastructure and expertise to support the systems. Nvidia executive Charlie Boyle tells CRN it will help channel partners ‘make money faster, close business faster and, at the end, deliver more value to their customers.’

Nvidia and Equinix said they have created a solution for businesses that want to quickly set up privately owned supercomputers to build generative AI applications but lack the data center infrastructure and expertise to support the systems.

Announced and launched on Wednesday, Equinix Private AI with Nvidia DGX is described by the two companies as a “turnkey” solution in which Equinix hosts and manages Nvidia DGX supercomputers purchased by businesses through Nvidia channel partners.

It’s made for businesses that don’t want their data in the public cloud for various reasons, including security, data sovereignty and auditability.

Charlie Boyle, vice president of DGX systems at Nvidia, told CRN that the new solution will help partners in the Nvidia Partner Network “make money faster, close business faster and, at the end, deliver more value to their customers” with DGX systems.

“It makes it easier for them to sell and close business. And it makes it much easier for customers to consume and get up and running with AI, so this is going to be tremendous for all of our [Nvidia Partner Network] partners to accelerate business they've been working on for months and getting new business into their pipelines,” he said.

With the fully managed service, the AI chip giant and data center powerhouse are hoping to solve problems they see many businesses facing: the significant amount of time it takes to plan and deploy a cluster of Nvidia DGX systems in a private data center, the lack of proper facilities to house such systems and a lack of personnel to manage them.

“Customers want world-class AI capabilities, but most of them don't have the data center infrastructure, the expertise to build, manage and run those systems,” Boyle said in a briefing.

Jon Lin, executive vice president and general manager of data center services at Equinix, said with his company’s expertise in setting up and managing data centers, the new Private AI service can reduce the lead time for deploying DGX supercomputers “from months to weeks or potentially days.”

What’s Included With Equinix Private AI With Nvidia DGX

Equinix’s Private AI service is focused on Nvidia’s DGX BasePOD or SuperPOD cluster configurations, the latter of which can range from 128 DGX H100 systems to as many as 2,048 systems, according to Nvidia documentation.

These DGX systems, each of which contains eight H100 GPUs, are connected using Nvidia’s ultra-low-latency InfiniBand networking technology and overseen by Equinix’s managed services team of more than 800 employees across the globe.
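
For a sense of scale, the GPU counts implied by those configurations work out as follows. This is just back-of-the-envelope arithmetic based on the figures above, not an Nvidia sizing tool or part of the Equinix offering.

```python
# Back-of-the-envelope GPU counts for the DGX H100 cluster sizes cited above.
# Assumes 8 H100 GPUs per DGX H100 system, per the article; the cluster sizes
# are the endpoints of the SuperPOD range described in Nvidia documentation.
GPUS_PER_DGX_H100 = 8

def total_gpus(num_systems: int) -> int:
    """Return the total GPU count for a cluster of DGX H100 systems."""
    return num_systems * GPUS_PER_DGX_H100

for systems in (128, 2_048):
    print(f"{systems:>5,} DGX H100 systems -> {total_gpus(systems):>6,} H100 GPUs")
# 128 systems work out to 1,024 GPUs; 2,048 systems work out to 16,384 GPUs.
```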

Another critical component of the service is Nvidia AI Enterprise, the chip designer’s software platform, which includes the building blocks customers need to train and run AI models, from the NeMo framework for building large language models to the TensorRT-LLM library for optimizing such models.
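
As a rough illustration of what that software layer looks like in practice, here is a minimal sketch of running inference through TensorRT-LLM’s high-level LLM API. The model name is a placeholder and the exact API surface depends on the TensorRT-LLM version installed, so treat this as an assumption about a typical setup rather than part of the Equinix service.

```python
# Minimal sketch: generating text with TensorRT-LLM's high-level LLM API.
# Assumes a recent tensorrt_llm release that ships the LLM/SamplingParams API;
# the model identifier below is a placeholder, not part of the Equinix service.
from tensorrt_llm import LLM, SamplingParams

def main():
    prompts = ["What is private AI infrastructure?"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Builds (or loads) an optimized engine for the model and runs generation
    # on the local GPUs, which is the optimization role the article describes.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.outputs[0].text)

if __name__ == "__main__":
    main()
```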

Customers of the Private AI service can deploy their DGX clusters at nearly 250 of Equinix’s International Business Exchange data centers worldwide, which include locations in North America, South America, Europe, Asia and Africa, according to the data center company.

DGX clusters housed within Equinix’s data centers are connected to the outside world through a high-speed private network, and the company also provides high-bandwidth interconnections to cloud services and enterprise service providers.

The service comes with “enterprise-grade” support and security, which includes assistance from Equinix staff on building and deploying custom AI models as well as access to Nvidia experts.

“We're hearing tremendous amounts of energy and vigor from enterprises around wanting to do this but in a way that is not exposing them from either a cybersecurity perspective [or] from an intellectual property leakage perspective, etc. That private infrastructure then becomes a critical path to be able to deliver that,” Lin said.

Private AI Service Promises Fast DGX Deployments, Not Faster Delivery

While Nvidia and Equinix are pitching the Private AI service as a fast and simple way for businesses to set up privately owned AI infrastructure, it won’t have an impact on lead times for delivery of DGX systems, according to Boyle.

“All DGX SuperPOD customers get the same lead time from order to shipment of systems, regardless of where the deployment will occur,” he told CRN.

Long lead times for systems containing Nvidia’s H100 GPUs, including DGX, have been a common complaint of OEMs and channel partners for the past year. The delays stem from heavy demand for the processors, which have been popular with AI developers large and small, including big names like OpenAI, because of their high performance.

Nvidia has been making efforts to increase production of H100s over the past several months to keep up with demand, but partners have told CRN that they still face long lead times.

“You’re just selling a backorder, a place in line,” said an executive at one Nvidia partner, who asked to not be named so that he could speak frankly about business with the chip designer.

An executive at another Nvidia partner said lead times are improving for H100-based systems, but he is still quoting customers four to eight weeks for deliveries “to be conservative.” In some instances, systems are shipping faster, he added.