Schneider Electric, Nvidia Form Design Partnership For Massive AI Deployments
‘No single company can deliver AI infrastructure at this scale alone. What Schneider Electric and Nvidia are showing is that when leaders in power, cooling and computing collaborate on solutions from the start, customers win,’ Robert Bunger, director of data center solution architecture at Schneider Electric, tells CRN via email.
Schneider Electric has partnered with Nvidia to shorten the time it takes to deploy the chipmaker’s most advanced AI accelerators in data centers and the enterprise.
The companies have co-engineered a new reference architecture for installing Nvidia’s GB300 NVL72 rack-scale systems en masse. Schneider Electric calls it the industry’s first reference architecture to provide seamless OT and IT interoperability with Nvidia Mission Control.
“The bigger picture is partnership. No single company can deliver AI infrastructure at this scale alone,” Robert Bunger, director of data center solution architecture at Schneider Electric, told CRN via email. “What Schneider Electric and Nvidia are showing is that when leaders in power, cooling and computing collaborate on solutions from the start, customers win.”
Bunger said with the speed of AI adoption, the only way to keep pace with the industry is through collaborations such as the one Schneider Electric and Nvidia have formed. He said the reference architecture provides a framework that incorporates power management of complex AI infrastructure as well as liquid cooling control systems.
Schneider Electric and Nvidia describe the reference design as a “plug-and-play,” end-to-end control system for advanced AI estates.
The controls reference connects edge devices and facility controls for energy management and liquid cooling across Nvidia GB300 NVL72 and Nvidia GB200 NVL72 deployments leveraging Nvidia Mission Control, the companies said in a statement. Using a “plug-and-play” architecture based on the MQTT protocol, it bridges operational technology infrastructure and IT systems, allowing operators to use data from every layer to optimize performance.
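MQTT’s appeal for this kind of OT/IT bridge is its topic-based publish/subscribe model: facility-side controllers publish telemetry to hierarchical topics, and IT-side systems subscribe with wildcards to whatever slice they need. The sketch below illustrates that pattern only; the topic names, payload fields, and in-memory broker are illustrative assumptions, not Schneider Electric’s or Nvidia’s actual schema (a real deployment would use an MQTT broker such as Mosquitto and a client library such as paho-mqtt).

```python
import json
from collections import defaultdict

def topic_matches(topic_filter: str, topic: str) -> bool:
    """MQTT-style topic matching: '+' matches exactly one level, '#' the remainder."""
    f_parts, t_parts = topic_filter.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

class ToyBroker:
    """In-memory stand-in for a real MQTT broker, for illustration only."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic_filter: str, callback) -> None:
        self._subs[topic_filter].append(callback)

    def publish(self, topic: str, payload: dict) -> None:
        # Round-trip through JSON, as a real MQTT payload would be serialized.
        message = json.dumps(payload)
        for topic_filter, callbacks in self._subs.items():
            if topic_matches(topic_filter, topic):
                for cb in callbacks:
                    cb(topic, json.loads(message))

broker = ToyBroker()
received = []

# IT-side orchestrator subscribes to all cooling telemetry in one data hall.
broker.subscribe("dc/hall1/+/cooling/#", lambda t, p: received.append((t, p)))

# OT-side controller publishes a liquid-cooling reading for a single rack.
broker.publish("dc/hall1/rack42/cooling/flow_lpm", {"value": 118.5, "unit": "L/min"})

print(received)  # the orchestrator sees the rack-level reading
```

The point of the pattern is the decoupling: the cooling controller does not know which IT systems are listening, and new consumers (monitoring, workload schedulers) can subscribe to existing topics without changes on the OT side.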
Dell Technologies said earlier this year that it was the first company to bring the Nvidia GB300 NVL72 to market, with CoreWeave receiving liquid-cooled Dell Integrated Rack Scalable Systems built on PowerEdge XE9712 servers featuring 72 Nvidia Blackwell Ultra GPUs and 36 Arm-based Nvidia Grace CPUs.
However, many AI buildouts—such as Microsoft’s nearly completed Wisconsin AI super cluster—are still using GB200s, putting the Schneider Electric and Nvidia partnership at the next stage of data center evolution.
“Schneider Electric’s new reference designs with Nvidia aren’t just technical documents; they’re playbooks that provide a framework for the efficient and responsible scaling of AI factories,” Bunger told CRN. “For the first time, power management and liquid cooling controls are fully integrated within the designs, with seamless interoperability with Nvidia Mission Control. This gives operators one framework to manage power, cooling and workloads in real time, with OT and IT full visibility.”
In addition, Schneider Electric and Nvidia have partnered on a second reference design that is focused on deploying AI infrastructure with power requirements up to 142 kilowatts per rack. This addresses the power needs of the Nvidia GB300 NVL72 in a single data hall, Schneider Electric said.
“That kind of power density is no longer theoretical,” Bunger told CRN. “By validating the power and cooling architecture in advance, across ANSI and IEC standards, and even offering digital twin simulations, Schneider Electric and Nvidia can remove the guesswork so that operators get a tested playbook for deploying highly efficient and high-performing AI clusters.”