The 10 Biggest Nvidia News Stories Of 2025
The events of 2025 further elevated Nvidia’s role on the global stage, with its CEO and founder, Jensen Huang, becoming a major geopolitical player as the company continued to play a central role in the ongoing AI infrastructure build-out in the United States and elsewhere.
While last year cemented Nvidia’s outsized role as the biggest, most critical provider of AI infrastructure, the events of 2025 further elevated the company’s role on a global stage, with its CEO and founder, Jensen Huang, becoming a major geopolitical player.
As Huang became a close ally to President Donald Trump this year to lobby against restrictions on GPU exports to China, Nvidia’s chips have “given the president a powerful bargaining lever with a variety of countries”—even playing a role in the White House’s “peace mediations between several nations,” The New York Times reported in October.
[Related: The 10 Biggest Intel News Stories Of 2025]
Recognizing the importance of Nvidia to U.S. interests, Huang started referring to the company’s portfolio of GPUs, systems and other products as being part of the “American tech stack” that is key to the country’s technological and economic leadership.
“We just have to keep advocating the sensibility and importance of American tech companies to be able to lead and win the AI race and help make the American tech stack the global standard,” Huang said in the company’s second-quarter earnings call in August when discussing his view that the U.S. should allow advanced GPU exports to China.
This happened as the Santa Clara, Calif.-based company continued to play a central role in the ongoing AI infrastructure build-out in the United States and elsewhere, shipping tens of billions of dollars of GPU platforms and announcing even-more powerful AI systems set to come out in quick succession in the following years.
Taking advantage of its increasingly high free cash flow, Nvidia went on a major investment spree this year, committing $100 billion to OpenAI and announcing plans to pour billions of dollars into Intel and Nokia among dozens of other companies it infused with capital.
Some of these agreements, including the ones with OpenAI and its rival Anthropic, are part of so-called circular deals by large and influential AI companies, where funds often flow back and forth between two or more parties, with suppliers in some cases investing in customers in exchange for buying their products.
These kinds of arrangements have fueled concerns among critics and observers that the massive amounts of money companies are spending on AI infrastructure amount to a bubble similar to the dot-com boom that led to the stock market crash in 2000.
During Nvidia’s latest earnings call in November, Huang pushed strongly against this idea, saying that the market will be sustained by the industry’s ongoing shift to accelerated computing and its embrace of generative, agentic and physical AI.
“As you consider infrastructure investments, consider these three fundamental dynamics. Each will contribute to infrastructure growth in the coming years,” he said.
These are among the 10 biggest Nvidia news stories of 2025, which also include the company’s big push into enterprise data centers with the RTX Pro servers, the U.S. government’s approval of H200 GPU exports to China, and Huang’s success in dispelling the notion that the rise of efficient AI models would lower demand.
10. Nvidia Starts Big Enterprise AI Push With RTX Pro Servers
Nvidia launched a major push this year to bring GPU acceleration to enterprise data centers with the new RTX Pro servers coming from top OEMs.
In an interview with CRN, Nvidia Americas Channel Chief Craig Weinstein said the RTX Pro servers represent the “largest scale-out opportunity” the company has seen in the nearly 10 years that he’s been working there.
Weinstein said the company views this as a multibillion-dollar opportunity for the channel because the product line is aimed at the many enterprise data centers that have traditionally run on CPU-only servers and are now coming due for a refresh.
Revealed at Nvidia’s GTC 2025 event in March, the RTX Pro servers are air-cooled systems powered by x86-based CPUs and its new RTX Pro 6000 Blackwell Server Edition GPUs. Available in the 8U and more standard 2U form factors, these servers are designed by Nvidia’s OEM partners to fit in standard data centers.
OEMs selling RTX Pro servers include Dell Technologies, Hewlett Packard Enterprise, Lenovo, Cisco Systems, Supermicro, Asus and Gigabyte.
These servers and their underlying GPUs are not nearly as powerful as Nvidia’s rack-scale offerings such as the Blackwell Ultra-based GB300 NVL72 platform, which requires liquid cooling and unprecedented amounts of electricity. Such systems are only feasible for marquee customers like Microsoft, OpenAI and Amazon that are willing to make massive investments in new data center infrastructure.
But the relatively lax energy and cooling requirements of the RTX Pro servers are what will make them appealing to a much broader constituency of enterprise customers that are keen on taking advantage of GPUs to power AI and other kinds of workloads that can benefit from accelerated computing, according to Weinstein.
“This exists inside the customer’s current data center. We’re going to have the opportunity to [put RTX Pro servers] inside the existing power footprint of an enterprise customer. That’s a powerful message today when the world is power-constrained,” he said.
9. Nvidia Teases 576-GPU Server Rack In Road Map Update
Nvidia provided a major road map update in March, saying that the company plans to release a rack-scale architecture for AI data centers that will connect 576 next-generation GPUs in the second half of 2027.
As it announced last year, the company plans to follow up this year’s Blackwell Ultra with a brand-new GPU architecture in 2026 called Rubin, which will use HBM4 high-bandwidth memory for the first time. Rubin will coincide with several other new chips, including Vera, a follow-up to Nvidia’s Arm-based Grace CPU.
At its GTC 2025 event, Huang (pictured) provided more details about Rubin, which he said would be part of the liquid-cooled Vera Rubin NVL144 platform that will debut in the second half of 2026 and connect 144 Rubin GPUs with Vera CPUs, which will sport 88 custom Arm cores, using its new, sixth-generation NVLink chip-to-chip interconnect.
The Vera Rubin NVL144 platform will have the ability to hit 3.6 exaflops of 4-bit floating-point (FP4) inference performance and 1.2 exaflops of 8-bit floating-point (FP8) training performance, which Nvidia said will make it 3.3 times faster than the new GB300 NVL72.
The platform will feature 13 TBps of HBM4 memory bandwidth and 75 TB of fast memory, a 60 percent increase from the GB300 NVL72. The NVLink 6 bandwidth will hit 260 TBps, double that of the GB300 NVL72. The ConnectX-9 SmartNIC will hit 28.8 TBps, also double.
Nvidia plans to follow up the Vera Rubin NVL144 with the liquid-cooled Rubin Ultra NVL576 in the second half of 2027. While it will keep the Vera CPU, the platform will come with a new GPU package called Rubin Ultra that will expand in size to four reticle-sized GPUs, featuring 1 TB of HBM4e memory and 100 petaflops of FP4 performance.
As the name implies, the Rubin Ultra NVL576 will connect 576 Rubin Ultra GPUs with Vera CPUs using a seventh generation of Nvidia’s NVLink. It will be capable of 15 exaflops of FP4 inference performance and 5 exaflops of FP8 training performance, which Nvidia said will make it 14 times faster than the GB300 NVL72 platform.
The platform will feature 4.6 PBps of HBM4e memory bandwidth and 375 TB of fast memory, eight times that of the GB300 NVL72. The NVLink 7 bandwidth will run 12 times faster at 1.5 PBps while the ConnectX-9 SmartNIC will hit 115.2 TBps, eight times greater than the GB300 NVL72.
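As a rough sanity check, the headline multipliers follow from simple division against the GB300 NVL72’s figures. The baseline numbers below are assumptions drawn from Nvidia’s published GB300 NVL72 specifications, not stated in this article:

```python
# Rough sanity check of the Rubin Ultra NVL576 multipliers cited above.
# Baseline GB300 NVL72 figures are assumptions based on Nvidia's public
# specs (~1.1 exaflops FP4 inference, ~0.576 PBps HBM bandwidth).
gb300_fp4_exaflops = 1.1
gb300_hbm_bw_pbps = 0.576

rubin_ultra_fp4_exaflops = 15.0  # stated above
rubin_ultra_hbm_bw_pbps = 4.6    # stated above

print(round(rubin_ultra_fp4_exaflops / gb300_fp4_exaflops))  # 14 (the "14 times faster" claim)
print(round(rubin_ultra_hbm_bw_pbps / gb300_hbm_bw_pbps))    # 8 (the "eight times" memory-bandwidth claim)
```

The ratios line up with the multipliers Nvidia quotes, assuming those baseline figures.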
On top of providing those details, Huang disclosed that Nvidia plans to deliver a next-generation platform with a new GPU called Feynman, which will feature a new high-bandwidth memory format, in 2028. This platform will also feature Vera CPUs, a next-generation NVLink interconnect, the eighth-generation NVSwitch and ConnectX-10 SmartNICs while using 204 TBps Spectrum 7 Ethernet switches.
8. Nvidia Announces Investment In US Manufacturing With Partners
Nvidia said in April that it will build entire AI supercomputers in the United States for the first time thanks to investments it’s making with Taiwanese manufacturing partners TSMC, Foxconn and Wistron.
In its announcement, the AI infrastructure giant said that it has “commissioned more than a million square feet of manufacturing space to build and test Nvidia Blackwell chips in Arizona and AI supercomputers in Texas.”
Nvidia announced the move as President Trump continued to adjust the wide-ranging tariffs he unleashed on nearly 60 countries and regions in early April with the goal of fixing perceived trade imbalances with other countries and growing domestic manufacturing.
This new production capability will allow the company to “produce up to a half trillion dollars of AI infrastructure” in the U.S. within the next four years, according to Nvidia.
The company said it is also working with Amkor and SPIL to handle the chip packaging and testing needs of its AI supercomputer products, respectively.
The investments Nvidia is making in U.S. manufacturing for its chips and AI supercomputers are “expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades,” according to the company.
The company said it will use its “advanced AI, robotics and digital twin technologies to design and operate” the U.S. facilities run by its partners.
7. Nvidia Becomes First Company To Hit $5T Market Cap
Nvidia this year became the first company to reach a market capitalization of $4 trillion and then $5 trillion, milestones that came only a few months apart.
The AI infrastructure giant achieved the $4 trillion milestone on July 9 when its share price grew 2.4 percent to $164, according to Reuters. By that point, its stock had risen 18 percent since the beginning of the year.
Nvidia then hit a $5 trillion market cap on Oct. 29 when its stock price grew more than 3 percent to $206 per share, CNBC reported that day.
As of Thursday, the company’s market cap was back down to $4.2 trillion.
Microsoft and Apple have seen their market caps reach $4 trillion, though only the latter was still above that threshold as of Thursday.
These stock movements have made Nvidia the world’s most valuable company this year.
The company exceeded $3 trillion in market cap for the first time in June of last year and weeks later surpassed Microsoft to become the world’s most valuable company for the first time. It hit the No. 1 spot at least two more times last year.
6. Competition Ratchets Up From Rivals While A Few Stumble
Nvidia continued to face growing competition this year from various companies, though at least a few of them stumbled in their efforts to introduce new AI chips.
One of the strongest signs of competition came from AMD, whose CEO, Lisa Su (pictured), at an investor event in November said that the company sees a “very clear path” to gaining double-digit share in the data center AI market.
Su said AMD is “on a clear trajectory” to make tens of billions of dollars in annual revenue in 2027 from its Instinct data center GPU business, thanks in large part to its recently announced deal for OpenAI to deploy 6 gigawatts of Instinct-based infrastructure.
Nvidia has also seen increased competition from Google, which is reportedly exploring the deployment of its TPUs—traditionally used for the company’s own infrastructure—inside data centers run by customers, including Meta.
Earlier this year, Google revealed its seventh-generation TPU, Ironwood, which it said is designed to improve performance and scalability for inference.
Amazon Web Services, too, is bringing the heat with the launch of its new Trainium3 accelerator chip, which AWS CEO Matt Garman said “will be the best inference platform in the world” in a recent interview with CRN.
“[Trainium3] will be the most efficient, most effective, the best cost performance, the lowest latency and the best throughput,” he said.
Microsoft, on the other hand, has reportedly been facing challenges with its homegrown AI chip design efforts, according to a June report by The Information.
The report said Microsoft’s Maia 100 accelerator chip wasn’t designed for generative AI but rather image processing, resulting in the company only using it to train staff. Mass production for a next-generation AI chip code-named “Braga,” on the other hand, was delayed by at least half a year by Microsoft due to feature requests by OpenAI.
Nevertheless, Microsoft CTO Kevin Scott reportedly said in October that he envisions the company mainly relying on its own chips in data centers in the long term.
The biggest driver behind some of these custom accelerator chip efforts has been Broadcom, which is the design partner for Google’s TPUs and custom Meta chips. The company has also signed up to aid with custom chips for OpenAI and Anthropic.
While Intel recently retooled its AI strategy with a new annual release cadence for data center GPUs, the company faced a setback in November when the leader of its AI strategy and road map, Sachin Katti, resigned to take a job at OpenAI.
In a memo to employees announcing Katti’s exit, Intel CEO Lip-Bu Tan said he will assume leadership of the AI Group and Intel Advanced Technologies Group that were previously led by Katti, explaining that his decision was motivated by recent changes felt by the teams.
Roughly a month before Katti left Intel, the company revealed a 160-GB, energy-efficient data center GPU that is part of a new annual GPU release cadence to deliver on its strategy of providing open systems and software architecture for AI systems.
Nvidia is facing competition from startups as well, including Axelera AI, d-Matrix, Encharge and Tenstorrent. One of those startups, Untether AI, announced in early June that it was shutting down after AMD acquired its engineering team.
5. Nvidia Gets OK To Sell H200 GPUs Into China
Nvidia got approval from the U.S. government in December to sell its older H200 GPU to customers in China after long-simmering drama that saw the Trump administration waffle on such matters for several months over national security concerns.
In a Dec. 8 post to the Truth Social website, President Trump wrote that the U.S. will allow the sale of H200 products to “approved customers in China, and other Countries, under conditions that allow for continued strong National Security.”
As part of the approval, Trump said that the U.S. government will take 25 percent of revenue Nvidia makes from the sale of H200 products to Chinese customers. He added that this will also apply to Intel, AMD and other U.S. semiconductor firms.
“We will protect National Security, create American Jobs, and keep America's lead in AI,” he wrote in his Dec. 8 Truth Social post, adding that the Blackwell and next-generation Rubin GPUs are not part of this deal.
At issue was the debate over whether the U.S. should allow domestic semiconductor companies to sell AI accelerator chips into China. Opponents fear that doing so could fuel China’s technological and military capabilities to the detriment of the U.S. while proponents argue that the country represents a significant market for U.S. firms.
Supporters have also said that any restrictions on AI chip sales into China could push the country to accelerate its own chip design and manufacturing capabilities.
Back in April, the Trump administration enacted export controls on the sale of Nvidia’s H20 GPU and comparable offerings from rivals into China. This GPU was designed as a less-powerful version of the H200 for Chinese customers to comply with export restrictions set by President Biden’s administration.
At the time, a Commerce Department spokesperson said the move was based on “the president’s directive to safeguard our national and economic security.”
During Nvidia’s first-quarter earnings call in May, Huang decried the new export restriction on the H20 into China but said he trusted the president’s “vision” and praised the U.S. leader for boosting domestic manufacturing.
As a result of the export restriction, the company revealed back then that it had incurred a $4.5 billion charge in the first quarter, which ended April 27, “associated with excess inventory and purchase obligations as the demand for H20 diminished.”
The company added that H20 sales in the first quarter were $4.5 billion and that it was “unable to ship an additional $2.5 billion of H20 revenue” for the period.
The export restriction also resulted in a loss of $8 billion in H20 sales for the second quarter, according to Nvidia.
This led Huang to spend a good chunk of the year lobbying the Trump administration and the American public to reverse the export restriction and approve the sales of future chips.
“The platform that wins China is positioned to lead globally today. However, the $50 billion China market is effectively closed to us,” Huang said during the May earnings call.
What the Trump administration ended up approving for sale to China is the H200 that launched as Nvidia’s fastest data center GPU early last year. This GPU was succeeded by the Blackwell-based B200 and GB200 products that debuted at the end of last year. Nvidia started shipping their successors—the B300 and GB300—later in the year.
While Nvidia welcomed the approval, it remains an open question whether Chinese companies will use Nvidia GPUs, if they’re allowed to at all: China’s government had reportedly blocked local companies from buying the H20 when that chip was still in contention.
However, Trump said in his Dec. 8 post that Chinese President Xi Jinping had “responded positively” to the U.S. government’s approval of H200 sales. And analysts told CNBC that Chinese customers may have use for such chips in the short term as the country focuses on building up its own AI chip design and manufacturing capabilities.
4. Huang Counters DeepSeek Concerns With New Growth Narrative
Huang used Nvidia’s GTC 2025 event in March to counter the narrative that the rise of efficient reasoning models like DeepSeek-R1 will undercut demand for its GPUs and associated componentry.
“The amount of computation we need at this point as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said during his keynote at GTC 2025.
Huang made the assertion after DeepSeek, the Chinese company behind its eponymous R1 model, claimed in late January that it spent significantly less money than Western competitors such as OpenAI and Anthropic to develop its model. This fueled concerns that AI model developers would require fewer GPUs to train and run models.
To the contrary, Huang said, reasoning models like DeepSeek-R1 and the agentic AI workloads they power will create a need for more powerful GPUs in greater quantities. That’s because of how reasoning models have significantly increased the number of tokens—or words and other kinds of characters—used for queries as well as answers when compared to traditional large language models.
To that end, Huang pointed to Nvidia’s upcoming GB300 NVL72 rack-scale platform powered by its new Blackwell Ultra GPU as well as more powerful computing platforms coming out over the next two years as necessary for keeping up with the computational demands of reasoning models.
This point was outlined by one of Huang’s top lieutenants, Ian Buck, in a briefing with journalists the day before his keynote.
“While DeepSeek can be served with upwards of 1 million tokens per dollar, typically they’ll generate up to 10,000 or more tokens to come up with that answer. This new world of reasoning requires new software, new hardware to help accelerate and advance AI,” said Buck, whose title is vice president of hyperscale and high-performance computing.
With data centers running DeepSeek and other kinds of AI models representing what Buck called a $1 trillion opportunity, Nvidia is focusing on how its GPUs, systems and software can help AI application providers make more money, with Buck saying that Blackwell Ultra alone can enable a 50-fold increase in “data center revenue opportunity.”
The 50-fold increase is based on the performance improvement Buck said Nvidia can provide for the 671-billion-parameter DeepSeek-R1 reasoning model with the new GB300 NVL72 rack-scale platform over an HGX H100-based data center at the same power level.
“The combination of total token volume [and] dollar per token expands from Hopper to Blackwell by 50X by providing a higher-value service, which offers a premium experience and a different price point in the market,” he said.
“As we reduce the cost of serving these models, they can serve more with the same infrastructure and increase total volume at the same time,” Buck added.
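Buck’s token economics can be illustrated with back-of-envelope arithmetic. The input figures are the ones he quoted; the per-answer cost is derived here for illustration and is not something Nvidia stated:

```python
# Back-of-envelope math on the reasoning-token economics Buck describes.
tokens_per_dollar = 1_000_000  # serving efficiency quoted for DeepSeek
tokens_per_answer = 10_000     # typical reasoning-chain length quoted

# A reasoning model burning 10,000 tokens per answer at 1 million
# tokens per dollar costs roughly a cent per answer to serve.
cost_per_answer = tokens_per_answer / tokens_per_dollar
print(f"${cost_per_answer:.2f} per answer")  # prints "$0.01 per answer"
```

The serving cost per answer, not per token, is what sets the price floor for reasoning-heavy workloads, which is why longer token chains translate into demand for more compute.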
In November, Nvidia indicated that demand for its GPUs remained strong, with Huang revealing that the company is expected to make $500 billion from its Blackwell and Rubin platforms between the beginning of this year and the end of next year.
3. Nvidia Goes On Investment Spree With Intel, OpenAI And Others
Nvidia went on a major investment spree this year, committing $100 billion to OpenAI and announcing plans to pour billions of dollars into Anthropic, Intel, Nokia and Synopsys among dozens of other companies it infused with capital.
Nvidia said in September that it plans to dole out parts of the $100 billion commitment for OpenAI as the ChatGPT creator deploys 10 gigawatts of new AI data centers using Nvidia systems, which will represent “millions of GPUs.” The first gigawatt of systems will land in the second half of 2026 using Nvidia’s Vera Rubin platform.
That same month Nvidia said it will invest $5 billion in Intel common stock as part of a deal that will see the two companies jointly develop multiple generations of PC and data center products.
Then in October, Nvidia announced that it plans to invest $1 billion into Nokia as part of a major push by the company to expand in the telecom industry with a new AI platform.
The next month the AI infrastructure giant announced a deal alongside Microsoft for the two companies to invest $10 billion and $5 billion, respectively, in OpenAI rival Anthropic, which, in turn, committed to spending $30 billion on Microsoft Azure compute capacity and using up to 1 gigawatt of additional capacity based on Nvidia’s rack-scale AI platforms.
Nvidia closed the year in early December by announcing a $2 billion investment in Synopsys common stock as part of an “expanded, strategic partnership” that will see the engineering software giant significantly boost application performance with GPUs.
The company admitted in a November regulatory filing that some of the investment deals, like with OpenAI and Anthropic, may not “be completed on expected terms, if at all.”
In addition to these multibillion-dollar deals, Nvidia used its strong cash reserves elsewhere to fund a myriad of other companies, including AI cloud providers Crusoe, Nscale and Lambda as well as AI model providers Cohere, Black Forest Labs and Mistral AI plus AI search provider Perplexity and AI safety developer Safe Superintelligence.
2. Huang Becomes Strong Ally To Trump
Huang became a strong ally to President Trump this year as he pushed the White House to allow sales of Nvidia’s GPUs into China, announced plans to invest in U.S. manufacturing for its products and supported the country’s ongoing AI infrastructure build-out.
When the Nvidia CEO wasn’t meeting with Trump or his associates in Washington, D.C., and elsewhere, Huang at times used his position as leader of the dominant provider of AI infrastructure to publicly praise the president and to ask Trump to reconsider certain positions, such as the export restriction on Nvidia GPUs going into China.
Billions of dollars have been at stake for Nvidia in the question of selling GPUs to customers in China. The company reported in May that a U.S. restriction on exports of its H20 GPU to Chinese customers resulted in a $4.5 billion charge in the first quarter and a loss of $8 billion in H20 sales in the second quarter.
Even when Nvidia reported this negative impact, Huang praised Trump in response to an analyst who asked whether Trump’s desire to have the United States win in the AI infrastructure market would result in the company being allowed to ship an alternative to the H20 to China.
“The president has a plan. He has a vision. And I trust him,” he said at the time.
Huang’s lobbying on this matter eventually paid off, with Trump announcing in early December that the U.S. will allow the sale of the more-powerful H200 to “approved customers in China, and other Countries, under conditions that allow for continued strong National Security.” In exchange, Nvidia will give 25 percent of revenue from H200 sales into China to the U.S. government. This deal also applies to AMD, Intel and other U.S. rivals.
But Trump’s green light for Nvidia to sell GPUs into China was quickly scrutinized by critics, including Trump’s deputy national security adviser from his first administration. The adviser, Matt Pottinger, argued in a New York Times opinion piece with Ben Buchanan, who advised former President Biden on AI matters, that the H200 “will be an even greater boon to China’s military and AI development.”
Huang and his company have influenced Trump on other geopolitical matters, with The New York Times reporting in October that Nvidia’s chips have “given the president a powerful bargaining lever with a variety of countries,” including Saudi Arabia and Britain.
“They have even played a role […] in the administration’s peace mediations between several nations,” the newspaper said at the time.
Huang has taken other opportunities to shout out Trump, such as when he praised the president in his keynote at Nvidia’s GTC DC event in October for moves by the White House to support the growth of AI data centers.
“And this is another area where our administration, President Trump, deserves enormous credit: his pro-energy initiative, his recognition that this industry needs energy to grow, it needs energy to advance, and we need energy to win. His recognition of that and putting the weight of the nation behind pro-energy growth completely changed the game,” he said.
“If this didn't happen, we could have been in a bad situation, and I want to thank President Trump for that,” Huang added in his keynote.
The Nvidia CEO also used his influence to help convince Trump against sending federal troops into San Francisco, according to an October Truth Social post by the president.
1. Huang Pushes Back Against Concerns Of An AI Bubble
Huang took on the popular question of whether Nvidia is at the center of an AI bubble in late November and said he sees “something very different” that will “contribute to infrastructure growth in the coming years.”
In arguing against comparisons to the dot-com bubble that led to the stock market crash in 2000, Huang said on his company’s third-quarter earnings call that the AI infrastructure giant is benefiting from “three massive platform shifts” happening at once.
The first shift, according to Huang, is the ongoing transition from general-purpose computing, made possible by CPUs, to accelerated computing enabled by GPUs and other accelerator chips.
“Secondly, AI has also reached a tipping point and is transforming existing applications while enabling entirely new ones,” he added. “Generative AI is replacing classical machine learning in search ranking, recommender systems, ad targeting, click-through prediction [and] content moderation.”
To Huang, the third shift is focused on agentic AI and physical AI, which he said “will be revolutionary, giving rise to new applications, companies, products and services.”
“The fastest-growing companies in the world today—OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, Tesla—are pioneering agentic AI,” he said.
With Nvidia now in the second year of its annual release cadence for data center GPUs and related products, Huang said the company’s impact on the economy will grow over time.
This is thanks to the “co-design” work it does across its hardware and software portfolio, “across the frameworks and models, across the entire data center, even power and cooling optimized across the entire supply chain in our ecosystem,” according to the CEO.
“And so [with] each generation, our economic contribution will be greater, our value delivered will be greater, but the most important thing is our energy efficiency—per watt—is going to be extraordinary every single generation,” he said.
Huang made the remarks after his company reported that third-quarter revenue grew to a record $57 billion, marking a 62 percent year-over-year increase that was largely driven by sales of the company’s Blackwell and Blackwell Ultra GPU platforms.
On the call, Nvidia CFO Colette Kress reiterated recent remarks by Huang that the company has “visibility” to $500 billion in revenue from the beginning of this year to the end of next year for its Blackwell and next-generation Rubin platforms. The company expects the AI infrastructure market to reach up to $4 trillion by the end of the decade.