Broadcom CEO Explains $100B Chip Vision, Nvidia, VMware And Supply Chain Success
Broadcom’s Hock Tan explains his plan to generate over $100 billion in chip sales by 2027, how agentic AI boosts VMware revenue, how his company is ‘gaining’ networking share and what large customers Google, OpenAI and Anthropic are buying.
CEO Hock Tan is bullish that Broadcom will generate over $100 billion in chip sales by 2027, that agentic AI will boost its VMware business, and that large customers like Google, Anthropic and OpenAI are elevating Broadcom to the next level.
“We have a line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027,” Tan said during Broadcom’s earnings report for first-quarter 2026 Wednesday.
“We have also secured the supply chain required to achieve this,” Tan said. “We have fully secured capacity of these components for 2026 through 2028.”
In terms of VMware, sales grew 13 percent year over year, with total contract value booked exceeding $9.2 billion.
“We are confident that the growth in generative and agentic AI will create the need for more VMware, not less,” Tan said.
[Related: VMware Head On ‘Huge VCF Tailwind’ From Memory Shortages, Server Prices Issues]
Tan also spoke at length about strong demand for its chips and technology from its six largest customers, including Google, Meta, Anthropic and OpenAI, as well as competition from Nvidia.
“You need the best chip that is around because you are competing against other LLM players. And most of all, you are also competing against Nvidia, who is by no means letting down their guard,” Tan said. “They are producing better and better chips with every passing generation.”
Broadcom Q1 2026 Earnings Results And Q2 Guidance
Before jumping into Tan’s boldest remarks, here’s a look at Broadcom’s financial results for its first fiscal quarter of 2026, which ended Feb. 1, 2026.
Broadcom generated $19.3 billion in revenue for first-quarter 2026, representing a 29 percent increase year over year.
Of Broadcom’s $19.3 billion in total sales, $12.5 billion came from its Semiconductor Solutions business, while $6.8 billion was generated by its Infrastructure Software Group.
AI revenue increased 106 percent year over year to $8.4 billion in first-quarter 2026. Broadcom is projecting AI revenue to reach $10.7 billion during its current second fiscal quarter of 2026.
Net income was $7.3 billion, up 34 percent year over year.
Broadcom’s revenue guidance for second-quarter 2026 is $22 billion, which represents a 47 percent increase year over year.
Here are the boldest comments from CEO Tan during his company’s first-quarter 2026 earnings report Wednesday.
Hock Tan On $100 Billion In Chip Revenue In 2027 (Part 1)
We have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027. We have also secured the supply chain required to achieve this.
I am focusing on the fact that these are pretty much all based on chips—whether they are XPUs, whether they are switch chips, DSPs—these are silicon content we are talking about.
[Our customers], particularly in XPUs, need the compute that is required to optimize and run training and inference on the workloads they produce out of the LLM. That technology comes from different dimensions. You need the best silicon design team around.
You need cutting-edge, really cutting-edge SerDes, very advanced packaging. And just as much, you need to understand how to network clusters of them together.
We have been doing this for more than 20 years in silicon and, in this particular space today, in generative AI. If you are trying to, as an LLM player, do your own chip, you cannot afford to have a chip that is just good enough.
You need the best chip that is around because you are competing against other LLM players. And most of all, you are also competing against Nvidia, who is by no means letting down their guard. They are producing better and better chips with every passing generation.
Hock Tan On $100 Billion In Chip Revenue In 2027 (Part 2)
So you have to, as an LLM player trying to establish your platform in the world, create chips that are better than and competitive with not just Nvidia but all the other LLM platform players that you are competing against.
And for that, you really need the best partner in silicon with the best technology, IP, and execution around. And very modestly, I would say we are, by far, way out ahead there.
One thing I would add that is particularly unique to us: When you create the silicon, you really have to get it up and running in high volume in production very quickly—time to market.
We are very, very experienced in doing that.
Anybody can design a chip in a lab that works well. Can you produce 100,000 of those chips quickly? And at yields that you can afford?
We do not see too many players in the world that can do that.
Broadcom’s Huge Customer Demand From Google, Anthropic, OpenAI And Meta
The ramp of custom AI accelerators across all five of our customers is progressing very well.
For Google, we continue our trajectory of growth in 2026 with strong demand for the seventh-generation Ironwood TPU. In 2027 and beyond, we expect to see even stronger demand from next generations of TPU.
For Anthropic, we are off to a very good start in 2026 for 1 gigawatt of TPU compute. And for 2027, this demand is expected to surge in excess of 3 gigawatts of compute. Our XPU franchise extends beyond TPUs.
Now contrary to recent analyst reports, Meta’s custom accelerator MTIA road map is alive and well. We are shipping now. In fact, for the next-generation XPUs, we will scale to multiple gigawatts in 2027 and beyond.
Rounding off for customers four and five, we see strong shipments this year, which we expect to more than double in 2027.
We also now have a sixth customer. We expect OpenAI to be deploying in volume their first-generation XPU in 2027 at over 1 gigawatt of compute capacity.
Let me take a second to emphasize our collaboration with these six customers to develop AI XPUs is deep, strategic, and multiyear.
How Broadcom’s Partnership Works With Six Customers
We bring to the partnerships with each of them unmatched technology and service in silicon design, process technology, advanced packaging and networking to enable each of these customers to achieve optimal performance for their differentiated LLM workloads.
We have the track record to deliver these XPUs in high volumes at an accelerated time to market with very high yields.
And beyond technology, we provide multiyear supply agreements as our customers scale up deployment of their compute infrastructure.
Our ability to assure supply in these times of constrained capacity in leading-edge wafers, in high-bandwidth memory and substrates ensures the durability of our partnerships.
And we have fully secured capacity of these components for 2026 through 2028.
VMware Revenue Grew 13 Percent Year On Year
Our Q1 infrastructure software revenue of $6.8 billion was in line with our guidance and up 1 percent year on year. We forecast infrastructure software revenue for Q2 to be approximately $7.2 billion, up 9 percent year on year.
VMware revenue grew 13 percent year on year.
[VMware] bookings continued to be strong, and total contract value booked in Q1 exceeded $9.2 billion, sustaining an annual recurring revenue growth of 19 percent year on year.
Let me reinforce that this growth in our software business reflects our focus and investments in foundational infrastructure. And our infrastructure software is not disrupted by AI.
In fact, VMware Cloud Foundation [VCF] is the essential software layer in data centers integrating CPUs, GPUs, storage and networking into a common high-performance private cloud environment.
As the permanent abstraction layer between AI software and physical silicon, VCF cannot be disintermediated or replaced.
It allows enterprises, in fact, to scale complex generative AI workloads effectively, with agility that hardware alone cannot provide.
We are confident that the growth in generative and agentic AI will create the need for more VMware, not less.
How ‘We Are Clearly Gaining Share In Networking’
Consistent now with the strong outlook for our XPUs, demand for AI networking is accelerating. Q1 AI networking revenue grew 60 percent year on year and represented one-third of total AI revenue.
In Q2, we project AI networking to accelerate a lot more and grow to 40 percent of total AI revenue. We are clearly gaining share in networking.
Let me explain. In scale-out, our first-to-market Tomahawk 6 switch at 100 terabits per second, as well as our 200G SerDes, are capturing demand from hyperscalers this year, whether they use XPUs or GPUs.
This lead will extend in 2027 with our next-generation Tomahawk 7, featuring double the performance.
Meanwhile, in scale-up, as cluster sizes at our customers expand, we are uniquely positioned to enable these customers to stay on direct-attached copper through our 200G SerDes.
As we next step up to 400G SerDes in 2028, our XPU customers will likely continue to stay on direct-attached copper. And this is a huge advantage as the alternative of going to optical is more expensive and requires significantly more power.