Nvidia Vera Rubin: 9 Hardware, Cloud Companies Building Out Ecosystem
Nvidia’s new Vera Rubin GPU platform, unveiled at CES 2026, is drawing strong interest from enterprises and technology partners eager to build next-generation AI infrastructure. CRN looks at nine strategic Nvidia vendor partners looking to build out the Rubin ecosystem.
The new Nvidia Vera Rubin platform introduced by Nvidia at CES 2026 generated interest not only from businesses looking at ways to significantly enhance their AI capabilities but also from technology providers looking to help build the infrastructure to enable those capabilities.
Nvidia used CES to launch its Rubin GPU platform, the highly anticipated follow-up to its fast-selling Blackwell Ultra products. Nvidia said its Rubin platform is in production, and that its technology partners expect their related offerings to be available in the second half of 2026.
Santa Clara, Calif.-based Nvidia plans to initially make Rubin available in two ways: through the Vera Rubin NVL72 rack-scale platform, which connects 72 Rubin GPUs and 36 of its custom, Arm-compatible Vera CPUs, and through the HGX Rubin NVL8 platform, which connects eight Rubin GPUs for servers running on x86-based CPUs.
[Related: The 10 Biggest Nvidia News Stories Of 2025]
Both platforms will be supported by Nvidia’s DGX SuperPOD clusters.
The rack-scale platform was originally called Vera Rubin NVL144 when it was revealed at Nvidia’s GTC 2025 event last March, with the 144 number meant to reflect the number of GPU dies in each server rack. But the company eventually decided against this, instead opting to stick with the NVL72 nomenclature used for the Grace Blackwell rack-scale platforms to reflect the number of GPU packages, each of which contain two GPU dies.
Nvidia’s technology partners include a wide range of companies, from hardware vendors like Supermicro to storage vendors like Vast Data and DDN to system vendors like Dell Technologies and Lenovo. Those companies all said they plan to introduce high-performance AI systems based on the Nvidia Vera Rubin platform.
Other partners unveiling support for Rubin include cloud providers CoreWeave, Nebius and Microsoft, along with software vendor Red Hat. These companies are expanding their infrastructure and platforms to take advantage of the enhanced performance promised by Nvidia’s new platform and significantly grow their AI capabilities.
CRN looks at nine hardware, software and services vendors that are helping build out the Nvidia Vera Rubin ecosystem. Read on for the details.
DDN AI Data Intelligence Platform
DDN, which develops high-performance storage technologies targeting AI data, unveiled a collaboration with Nvidia supporting DDN’s next-generation AI factory architecture. The DDN AI Data Intelligence Platform, which combines the company’s EXAScaler platform for high-performance AI training and high-throughput workloads with its Infinia software-defined data platform built for AI inference, RAG, data preparation and metadata-heavy workloads, will be powered by Nvidia Rubin GPUs and Nvidia BlueField-4 DPUs.
Working with Nvidia Rubin, DDN looks to help enterprises and hyperscalers operationalize large-scale AI performance by eliminating data bottlenecks. DDN said the unified stack provides up to 99 percent GPU utilization across large-scale AI environments while reducing time to first token, or TTFT, by 20 percent to 40 percent.
DDN said it is a certified storage technology for the Nvidia DGX SuperPOD, and that the company already powers over 1 million GPUs worldwide for AI and high-performance computing environments.
Vast Data AI Operating System
Vast Data, which builds storage technology targeting AI applications, unveiled a new inference architecture that enables the Nvidia Inference Context Memory Storage Platform aimed at deploying agentic AI. Vast Data said its platform is a new class of AI-native storage infrastructure for gigascale inference built on Nvidia BlueField-4 DPUs and Nvidia Spectrum-X Ethernet networking.
The new platform runs Vast Data’s AI Operating System software natively on Nvidia BlueField-4 DPUs to move critical data services directly into the GPU server so inference executes in a dedicated data node architecture, which the company said removes unnecessary copies of the data to reduce TTFT. This is combined with Vast Data’s parallel DASE (Disaggregated Shared-Everything) architecture so that each host can access a shared global context namespace without the need for coordinating data requests, providing a streamlined path from GPU memory to persistent NVMe storage.
Supermicro Data Center Building Block Offerings
Data center infrastructure vendor and white-box server and storage manufacturer Supermicro unveiled plans to enable first-to-market delivery of data center-scale offerings optimized for the Nvidia Rubin platform with its deployment of the flagship Nvidia Vera Rubin NVL72 and Nvidia HGX Rubin NVL8 systems. The systems will be part of Supermicro’s Data Center Building Block Solutions (DCBBS) approach to streamlining production while providing extensive customization options and fast time-to-deployment.
Supermicro is offering:
- Nvidia Vera Rubin NVL72 SuperCluster rack-scale systems that it says can deliver 3.6 exaflops of NVFP4 performance.
- 2U liquid-cooled Nvidia HGX Rubin NVL8 8-GPU systems optimized for AI and high-performance computing workloads.
Lenovo AI Cloud Gigafactory
Lenovo unveiled the Lenovo AI Cloud Gigafactory with Nvidia to expand on the partnership the two have for accelerating hybrid AI adoption across personal, enterprise and public AI platforms. The aim is to give AI cloud providers the ability to reach time to first token in weeks by quickly deploying gigawatt-scale AI factories using ready-to-use components, expert guidance and industrialized build processes.
In addition to taking advantage of the Nvidia Blackwell Ultra high-performance architecture that uses Lenovo’s Nvidia GB300 NVL72 system and a liquid-cooled rack-scale architecture integrating 72 Nvidia Blackwell Ultra GPUs and 36 Nvidia Grace CPUs, the Lenovo AI Cloud Gigafactory with Nvidia supports the Nvidia Vera Rubin NVL72 system for AI training and inference.
Dell Technologies With Nvidia Rubin
Dell said it will support the Nvidia Rubin platform with its Dell AI Factory, which is aimed at bringing AI to businesses. The company plans to introduce new PowerEdge servers featuring the Nvidia Vera Rubin NVL72 platform, promising to deliver 3.6 exaflops of AI performance with 75 TB of fast memory and advanced resiliency capabilities.
The new platforms will support Nvidia’s Arm-based Vera CPUs, which feature 88 custom Olympus cores providing 176 threads via Nvidia spatial multithreading, along with 1.2 TB-per-second memory bandwidth. The CPUs are designed to serve as data movement engines for agentic AI applications, the company said.
Dell is also expanding its PowerEdge line with support for Nvidia HGX Rubin NVL8 configurations it says will deliver about 400 petaflops of AI performance with 2.3 TB of HBM4 memory, 176-TB-per-second memory bandwidth, and 800-Gbps Nvidia ConnectX-9 SuperNICs and Nvidia BlueField-4 DPUs.
CoreWeave To Add Nvidia Rubin
CoreWeave, the developer of an AI cloud, said it will add Nvidia’s Rubin technology to its AI cloud platform to help expand the range of options for customers looking to build and deploy agentic AI, reasoning and large-scale inference workloads. CoreWeave said it expects to be among the first cloud providers to deploy the Nvidia Rubin platform in the second half of 2026.
“The Nvidia Rubin platform represents an important advancement as AI evolves toward more sophisticated reasoning and agentic use cases,” said Michael Intrator, CoreWeave’s co-founder, chairman and CEO, in a statement. “Enterprises come to CoreWeave for real choice and the ability to run complex workloads reliably at production scale. With CoreWeave Mission Control as our operating standard, we can bring new technologies like Rubin to market quickly and enable our customers to deploy their innovations at scale with confidence.”
Nebius Looks To Add Nvidia Rubin
Nebius said at CES that it plans to deploy the Nvidia Rubin platform through its Nebius AI Cloud and Nebius Token Factory to help unlock next-generation reasoning and agentic AI capabilities for customers starting in the second half of 2026.
Nebius, a Nvidia Cloud Partner, expects to be among the first AI cloud providers to offer Nvidia Vera Rubin NVL72. The company plans to integrate Vera Rubin NVL72 across its full-stack infrastructure at data centers in the U.S. and Europe to help customers build next-generation AI applications with regional availability and control.
Nebius founder and CEO Arkady Volozh said in a statement, “We are proud to be one of the first on the market to offer Vera Rubin GPUs as we fuel the next wave of AI innovation. By integrating Vera Rubin into Nebius AI Cloud and our inference platform Nebius Token Factory, we’re giving AI innovators and enterprises the infrastructure they need to develop agentic and reasoning AI systems faster and more efficiently.”
Microsoft Plans Large-Scale Nvidia Rubin Deployments
Microsoft President of Azure Hardware Systems and Infrastructure Rani Borkar used a blog post to unveil her company’s plans for deploying the Nvidia Rubin platform with Azure.
“Microsoft’s long-range datacenter strategy was engineered for moments exactly like this, where Nvidia’s next-generation systems slot directly into infrastructure that has anticipated their power, thermal, memory, and networking requirements years ahead of the industry. Our long-term collaboration with Nvidia ensures Rubin fits directly into Azure’s forward platform design,” Borkar wrote.
Azure’s architecture already incorporates the core architectural assumptions required for deploying Nvidia Rubin, according to Borkar’s blog post:
- Azure’s rack architecture was already designed to support the sixth-generation Nvidia NVLink fabric needed to deliver the roughly 260 TB-per-second bandwidth required by Vera Rubin NVL72.
- Azure’s network infrastructure was purpose-built to support large-scale AI workloads, and its Nvidia ConnectX-9 networking supports the Rubin AI infrastructure.
- Azure has the cooling, power envelopes and rack geometries to handle the higher thermal windows and higher rack densities needed for the Rubin memory stack.
- Azure has already integrated and validated memory extension behaviors to work with the Rubin Superchips’ new SOCAMM2 memory expansion architecture.
- Azure’s supply chain, mechanical design and orchestration layers have been pre-tuned for Rubin’s massively larger GPU footprints and multi-die layouts, Borkar wrote.
Red Hat To Pair Enterprise Open Source With Nvidia Vera Rubin
Red Hat said at CES that it intends to deliver a complete AI stack optimized for the Nvidia Vera Rubin platform with Red Hat Enterprise Linux, Red Hat OpenShift and Red Hat AI.
As the IT industry moves beyond individual servers toward unified, high-density systems, Red Hat said it plans to help start this transformation with the introduction of Red Hat Enterprise Linux for Nvidia, a specialized edition of the company’s enterprise Linux platform optimized for the Nvidia Rubin platform and tuned to drive future production on Red Hat OpenShift and Red Hat AI.
Red Hat Enterprise Linux for Nvidia will support the platform features of the latest Nvidia architectures on day zero of availability, starting with the Nvidia Rubin platform, Red Hat said.
Red Hat Enterprise Linux for Nvidia will be fully aligned with the main build of the operating system, so any improvements made in Red Hat Enterprise Linux for Nvidia will be incorporated into Red Hat Enterprise Linux, helping customers easily transition to the traditional Red Hat Enterprise Linux as needed, the company said.
Dylan Martin contributed to this story.