7 Big Nvidia Products, Partnerships And Takeaways From GTC DC 2025

At Nvidia’s GTC DC 2025 event, the AI infrastructure giant demonstrated its ever-expanding dominance through new partnerships with companies like Palantir and CrowdStrike while Nvidia CEO Jensen Huang touted his alignment with President Trump on U.S. manufacturing.

While Nvidia’s GTC DC event last week was short on major product reveals, the AI infrastructure giant used the conference to demonstrate its ever-expanding dominance through a slew of new partnerships with large and small companies.

The Santa Clara, Calif.-based company also provided a big, new financial disclosure reflecting the amped up demand for its latest Blackwell products and its upcoming Rubin GPU launch.

[Related: Arm Data Center Leader: ‘No Doubt About Long-Run Need’ For Massive AI Buildout]

Making his first keynote appearance at the Washington, D.C. offshoot of his company’s flagship GTC event, Nvidia CEO Jensen Huang emphasized how Nvidia is aligned with President Trump on the “reindustrialization” of the United States to the benefit of his company and others participating in the AI data center build-out.

“And this is another area where our administration, President Trump, deserves enormous credit: his pro-energy initiative, his recognition that this industry needs energy to grow, it needs energy to advance, and we need energy to win. His recognition of that and putting the weight of the nation behind pro-energy growth completely changed the game,” he said.

“If this didn't happen, we could have been in a bad situation, and I want to thank President Trump for that,” Huang added in his Tuesday keynote.

What follows are seven big Nvidia products, partnerships and takeaways from the company’s GTC DC event, ranging from a $1 billion investment in Nokia and Huang’s disclosure about Blackwell demand to how Nvidia and partner companies are helping the U.S. “bring manufacturing back” and new partnerships with Oracle, HPE and Palantir.

Additional reporting by Kyle Alspach.


Huang: Blackwell, Rubin GPU Revenue Will Reach $500B Next Year

Huang said Nvidia expects to make $500 billion in cumulative revenue from its Blackwell and Rubin GPU platforms through next year.

This is based on the company’s expectation that it will sell 20 million Blackwell and Rubin GPUs by the end of 2026 after shipping six million Blackwell GPUs so far this year, according to an Nvidia presentation that accompanied Huang’s on-stage disclosure.

This figure doesn’t include revenue from China, where sales of high-end Nvidia GPUs are halted due to U.S. export controls, according to Huang.

“This is just the West,” he said.

Huang said this “visibility” into the $500 billion in revenue Nvidia expects to make through next year is unprecedented for any tech company, adding that it would represent five times the revenue of Nvidia’s last-generation Hopper GPU platform. The company generated $100 billion from four million Hopper GPU shipments between 2023 and 2025.

“We have now reached our virtuous cycle, our inflection point, and this is quite extraordinary,” Huang said before disclosing the $500 billion figure.

When Nvidia announced in February that it made $11 billion from the first three months of Blackwell GPU shipments, the company called the GPU its “fastest product ramp” yet. The company started shipping the next-generation GPU, Blackwell Ultra, over the summer.

The company is now preparing to launch Blackwell Ultra’s successor, Rubin, next year for multiple platforms, including the Vera Rubin NVL144 rack-scale platform that is slated to connect 144 Rubin GPUs within a single server rack.

This is part of Nvidia’s annual release cadence for data center platforms that the company disclosed all the way back in 2023.

“Every single year, we are going to come up with the most extreme co-design system so that we can keep driving up performance and keep driving down the token generation cost,” Huang said, referring to text, images and other kinds of data generated by AI models.

Huang Heralds Return Of Manufacturing To US For Nvidia And Partners

During his keynote, Huang emphasized how Nvidia and partner companies are combining their capabilities to “bring back manufacturing” to the United States.

One of Huang’s examples was how Nvidia is using TSMC’s new chip manufacturing facility in Arizona to fabricate Blackwell chips, which are then assembled along with other components onto larger packages at a Foxconn facility in Texas.

“Robots will work around the clock to pick and place over 10,000 components onto the Grace Blackwell PCB,” Huang narrated over a video clip demonstrating the work.

The Nvidia CEO also highlighted how high-bandwidth memory chips used for the company’s Blackwell systems are manufactured in Indiana.

“From silicon in Arizona and Indiana to systems in Texas, Blackwell and future Nvidia AI factory generations will be built in America, writing a new chapter in American history and industry,” Huang said in the same clip.

In the keynote, Huang said Nvidia’s investments in building a U.S. supply chain for its products—many of which have traditionally been fabricated in Asia and elsewhere—are aligned with Trump’s goal of bringing manufacturing back to the country.

“The first thing that President Trump asked me for is bring manufacturing back. Bring manufacturing back because it's necessary for national security. Bring manufacturing back because we want the jobs. We want that part of the economy. And nine months later, nine months later, we are now manufacturing in full production: Blackwell in Arizona,” he said.

On top of investing in its own U.S. supply chain of manufacturers, Nvidia highlighted how it’s helping other companies to, in its words, “help overcome labor shortages and drive American reindustrialization.”

The company said manufacturers, industrial software developers and robotics companies will accomplish this using its Nvidia Omniverse technologies to “build state-of-the-art robotic factories and new autonomous collaborative robots.”

These technologies include an expanded version of its “Mega” Omniverse Blueprint that will help German industrial giant Siemens build large-scale “digital twins of factories that bring together realistic 3D models with live operational data,” according to Nvidia.

The company also noted how U.S. networking equipment manufacturer Belden is using Accenture’s Physical AI Orchestrator platform, which incorporates Omniverse software libraries, the Nvidia Metropolis vision AI platform and in-house AI agents to “create virtual safety fences for instant hazardous zone monitoring and real-time quality-inspection systems in factories and warehouses.”

Other companies touted for using Nvidia technologies to support manufacturing include construction giant Caterpillar, Lucid Motors and Toyota.

Nvidia To Invest $1B In Nokia In Major AI Telecom Platform Push

Nvidia announced that it plans to invest $1 billion into Nokia as part of a major push by the company to expand in the telecom industry with a new AI platform.

Disclosed during Nvidia’s GTC DC event, the AI infrastructure giant said the investment is part of a new strategic partnership that will see Nokia adopt its newly announced Aerial RAN Computer to aid with the telecom industry’s transition to 6G cellular networks.

Nvidia pitched the telecom initiative as a way to help mobile operators “improve performance and efficiency as well as enhance network experiences” for AI applications that now drive massive amounts of web traffic.

The first carrier in support of the partnership is T-Mobile U.S., which will work with Nvidia and Nokia to “drive and test” AI-powered radio access network (RAN) technologies “as part of the 6G innovation and development process.” Trials are slated to begin next year.

At the foundation of Nokia’s so-called AI-RAN solution will be Nvidia’s Aerial RAN Computer Pro—shortened as ARC-Pro—which the AI infrastructure giant is billing as a reference design for accelerated computing systems that can aid with the telecom industry’s move from “5G-Advanced to 6G through software upgrades.”

Nvidia said the ARC-Pro reference design will enable manufacturers and network equipment providers to use commercial off-the-shelf or proprietary products to build AI-RAN products in support of “new buildouts and expansions to existing base stations.”

The first OEM promoted for this AI-RAN push is Dell Technologies, whose PowerEdge servers will be used to drive innovation in Nokia’s solution, according to Nvidia.

Nvidia Reveals BlueField-4 DPU, Packed With 64-Core Grace CPU

Nvidia said that it plans to integrate its Grace CPU and ConnectX-9 SuperNIC into the next-generation BlueField-4 DPU to bring 800 Gbps of network throughput to future AI data centers for high-performance inferencing.

Designed to offload and accelerate networking, storage and security workloads from a server’s host CPU, BlueField-4 is set to debut in Nvidia’s Vera Rubin rack-scale platforms next year. The DPU is also expected to become available for other server platforms.

In a briefing with journalists and analysts on Monday, Dion Harris, a senior director at Nvidia, said BlueField-4 is “designed to power the operating system of AI factories” and will deliver six times more compute compared to BlueField-3, which became generally available in 2023 and offers 400 Gbps of network throughput.

Nvidia said BlueField DPUs have been “widely adopted” by AI infrastructure and cybersecurity companies thanks to their support of the product line’s underlying DOCA software framework. These vendors include Cisco Systems, DDN, Dell Technologies, Hewlett Packard Enterprise, IBM, Lenovo, Supermicro, Vast Data and Weka on the server and storage side.

BlueField-4’s major gain in compute performance is made possible by its 64-core Grace CPU, which relies on the server-grade Arm Neoverse V2 microarchitecture. The BlueField-3, on the other hand, features up to 16 CPU cores based on Arm’s Cortex-A78 microarchitecture that is typically marketed for smartphones and other mobile devices.

On the networking side, the BlueField-4 takes advantage of Nvidia’s upcoming ConnectX-9 Spectrum-X SuperNIC, which Harris said pushes “the boundaries of AI networking scale and functionality” by providing 1.6 Tbps per GPU of network throughput.

Nvidia Teams Up With Oracle, HPE To Build AI Systems For DOE

Nvidia said it’s working with Oracle and HPE to build seven new AI systems for the U.S. Department of Energy, including what it is calling the agency’s “largest AI supercomputer.”

Of the two AI supercomputers Nvidia is building with Oracle, the larger one, called Solstice, will pack 100,000 interconnected Nvidia Blackwell GPUs to “drive technological leadership across U.S. security, science and energy applications,” according to the AI infrastructure giant. Nvidia did not say when Solstice will go online.

The second system being built with Oracle, called Equinox, will feature 10,000 interconnected Blackwell GPUs and is slated to come online in the first half of next year.

Both systems are expected to provide a total of 2,200 exaflops of AI performance, said Nvidia, which did not specify the numerical format for the performance metric.

The two systems are being built for the DOE’s Argonne National Laboratory, whose scientists and researchers are expected to use Nvidia’s open Megatron-Core library and the company’s TensorRT inference software stack to build AI agents.

Nvidia said Argonne is also deploying three “powerful” Nvidia-based systems—called Tara, Minerva and Janus—to “expand access to AI-driven computing for researchers across the country.” Specifications for these systems were not disclosed.

Solution provider powerhouse World Wide Technology is supporting the Tara, Minerva and Janus systems, according to the DOE.

The DOE’s Los Alamos National Laboratory, on the other hand, plans to use Nvidia’s Vera Rubin rack-scale platform and Quantum-X800 InfiniBand networking fabric for its next-generation Mission and Vision systems, according to Nvidia.

Nvidia, CrowdStrike Deepen Partnership For Agentic AI, Edge AI

Nvidia and CrowdStrike announced an expanded partnership that will see the technology giants utilizing each other’s capabilities more deeply for the creation and deployment of AI agents.

The expanded partnership was revealed after the two companies in September announced an integration between Charlotte AI AgentWorks—CrowdStrike’s no-code platform for building, testing, deploying and orchestrating security agents—and Nvidia Nemotron, a family of open AI models, datasets and technologies.

Now, with the help of Nvidia’s NeMo Data Designer, CrowdStrike is “building deeper agents based on our unique, proprietary data,” said Daniel Bernard, chief business officer at CrowdStrike, in a briefing with journalists.

Examples of where this makes a difference include Falcon Complete, the vendor’s managed detection and response (MDR) platform, Bernard said.

The capabilities allow CrowdStrike to harness the experience of Falcon Complete analysts and transform that expertise into datasets, he said. Those datasets can then be turned into AI models, which in turn can ultimately “create agents based on the whole composition and experience that we built up within the company,” Bernard said.

With the help of the NeMo Data Designer from Nvidia, CrowdStrike is “building deeper, smarter agents — and doing so faster,” he said.

CrowdStrike is also working with Nvidia technologies to accelerate its efforts on bringing AI to edge computing environments, the companies said.

The security vendor is also working with Nvidia’s NeMo Agent Toolkit to “create composable AI that lives anywhere, specifically agents on the edge and AI on the edge,” Bernard said.

Using the NeMo toolkit, CrowdStrike is gaining the ability to have the Falcon platform operate effectively at the edge, which “makes cybersecurity faster and scales it even better,” he said.

Nvidia Announces ‘First-Of-Its-Kind’ Integration With Palantir

Nvidia said that it is working with Palantir Technologies to develop what it’s calling a “first-of-its-kind integrated technology stack for operational AI” with the goal of accelerating and optimizing “complex enterprise and government systems.”

Palantir plans to integrate Nvidia’s GPU-accelerated data processing and route optimization libraries, open models and accelerated computing into the Palantir Ontology software that is at the foundation of its Palantir AI Platform.

This will result in customers receiving the “advanced, context-aware reasoning” capabilities they need to “power domain-specific automations and AI agents for the sophisticated environments of retailers, health care providers, financial services and the public sector,” according to Nvidia.

One of the first companies planning to take advantage of the integrated technology stack is retail giant Lowe’s, which will use the combined capabilities to create a “digital replica of its global supply chain network to enable dynamic and continuous AI optimization.”