Intel CEO Gelsinger: ‘We’re Going To Build AI Into Every Product We Build’
‘We firmly believe in this idea of democratizing AI, opening the software stack and creating and participating with this broad industry ecosystem that’s emerging. It’s a great opportunity and one that Intel is well-positioned to participate in,’ Intel CEO Pat Gelsinger said on the company’s earnings call.
Intel CEO Pat Gelsinger told listeners on the chipmaker’s latest quarterly earnings call that AI will become part of every business and an inflection point for the PC market, with his company readying to meet customer demand.
“We see AI as a workload, not as a market, which will affect every aspect of the business—whether it’s client, whether it’s edge, whether it’s standard data center, on-premises enterprise or cloud,” Gelsinger said Tuesday while reporting results for the Santa Clara, Calif.-based chipmaker’s second fiscal quarter, which ended July 1.
“We’re going to build AI into every product that we build—whether it’s a client, whether it’s an edge platform for retail and manufacturing and industrial use cases. Whether it’s an enterprise data center. … We firmly believe in this idea of democratizing AI, opening the software stack and creating and participating with this broad industry ecosystem that’s emerging. It’s a great opportunity and one that Intel is well-positioned to participate in.”
Intel Q2 Results
Gelsinger has placed his bets on Meteor Lake, a next-generation processor planned for the fall, as Intel’s way to own the coming AI PC moment. He wants Intel to repeat the success of its Centrino wireless network adapters, which helped spread Wi-Fi in the 2000s.
“We do see Meteor Lake ushering in the AI PC generation where you have tens of watts responding in a second or two,” he said. “And then AI is going to be in every hearing aid in the future—including mine—where it’s 10 microwatts and instantaneous. … AI drives workloads across the full spectrum of applications.”
AI will be “infused in everything,” he told listeners.
“There’s going to be AI chips for the edge. AI chips for the communications infrastructure. AI chips for sensing devices. For automotive devices,” he said. “And we see opportunities for us, both as a product provider and as a foundry and technology provider across that spectrum.”
The CEO’s comments and a profitable second fiscal quarter appeared to excite investors Tuesday, with the company’s stock price rising 8 percent after hours to about $37 a share.
Here’s what else Gelsinger had to say on the call.
Intel’s AI Opportunity
In Q2, we began to see real benefits from our accelerating AI opportunity. We believe we are in a unique position to drive the best possible TCO [total cost of ownership] for our customers at every node on the AI continuum.
Our strategy is to democratize AI—scaling it and making it ubiquitous across the full continuum of workloads and usage models.
We are championing an open ecosystem with a full suite of silicon and software IP to drive AI from cloud to enterprise, network, edge and client—across data prep, training and inference in both discrete and integrated solutions. … The surging demand for AI products and services is expanding the pipeline of business engagements for our accelerator products, which includes our Gaudi, Flex and Max product lines.
Our pipeline of opportunities through 2024 is rapidly increasing. … The accelerator pipeline is now well over $1 billion and growing rapidly, up about 6X this past quarter. That’s led by, but not exclusively by, Gaudi.
That also includes the Max and Flex product lines as well. But the lion’s share of that is Gaudi. Gaudi 2 is shipping volume product today. Gaudi 3 will be the volume product for next year. And then Falcon Shores in ’25. And we’re already working on Falcon Shores 2 for ’26.
So we have a simplified road map as we bring together our GPU and our accelerators into a single offering. … But with the progress that we’re making on Gaudi 2, which becomes more generalized with Gaudi 3, the software stack and our oneAPI approach will give customers confidence that they have forward compatibility into Gaudi 3 and Falcon Shores, which will just broaden the flexibility of that software stack.
We’re adding FP8 [8-bit floating point]. We just added PyTorch 2 support. So every step along the way it gets better and broader use cases.
More language models are being supported. More programmability is being supported in the software stack. And we’re building that full solution set as we deliver on the best of GPU and the best of matrix acceleration in the Falcon Shores timeline.
But every step along the way, it just gets better. Every software release gets better. Every hardware release gets better along the way to cover more of the overall accelerator marketplace.
Five Nodes Progress
We remain on track to five nodes in four years and to regain transistor performance and power performance leadership by 2025. Looking specifically at each node, Intel 7 is done. And with the second-half launch of Meteor Lake, Intel 4, our first EUV [extreme ultraviolet] node is essentially complete with production ramping.
For the remaining three nodes … Intel 3 met defect density and performance milestones in Q2, released PDK [process design kit] 1.1 and is on track for overall yield and performance targets.
We will launch Sierra Forest in the first half of ’24, with Granite Rapids following shortly thereafter—our lead vehicles for Intel 3.
On Intel 20A, our first node using both RibbonFET and PowerVia, Arrow Lake, a volume client product, is currently running its first stepping in the fab. … On Intel 18A, we continue to run internal and external test chips and remain on track to being manufacturing ready in the second half of 2024.
Attacking CPU Inventory
Our client business exceeded expectations and gained share yet again in Q2 as the group executed well, seeing a modest recovery in the consumer and education segments, as well as strength in premium segments where we have leadership performance.
We have worked closely with our customers to manage client CPU inventory down to healthy levels. As we continue to execute against our strategic initiatives, we see a sustained recovery in the second half of the year as inventory has normalized.
Importantly, we see the AI PC as a critical inflection point for the PC market over the coming years that will rival the importance of [Intel] Centrino and Wi-Fi in the early 2000s. And we believe that Intel is very well positioned to capitalize on the emerging growth opportunity.
In addition, we remain positive on the long-term outlook for PCs as household density is stable to increasing across most regions, and usage remains above pre-pandemic levels. … We’ve worked through inventory in Q4, Q1 and some in Q2.
We now see the OEMs and the channel at healthy inventory levels. We continue to see solid demand signals for the client business from our OEMs. And even some of the end-of-quarter and early quarter sell-through are clear indicators of good strength in that business. And obviously we combine that with gaining share again in Q2.
So we come into the second half of the year with good momentum and a very strong product line. So we feel quite good about the client business outlook. …
Meteor Lake To Meet AI PC Moment
Building on strong demand for our 13th-gen Intel Core processor family, Meteor Lake is ramping well in anticipation of a Q3 PRQ [production release qualification] and will maintain and extend our performance leadership and share gains over the last four quarters.
Meteor Lake will be a key inflection point in our client processor road map as the first PC platform built on Intel 4, our first EUV node, and the first client chiplet design enabled by Foveros advanced 3-D packaging technology, delivering improved power, efficiency and graphics performance.
Meteor Lake will also feature a dedicated AI engine—Intel AI Boost. With AI Boost, our integrated neural VPU, enabling dedicated, low-power compute for AI workloads, we will bring AI use cases to life through key experiences people will want and need for hybrid work, productivity, sensing, security and creator capabilities. … We do see Meteor Lake ushering in the AI PC generation where you have tens of watts responding in a second or two.
And then AI is going to be in every hearing aid in the future—including mine—where it’s 10 microwatts and instantaneous. … AI drives workloads across the full spectrum of applications.
And for that, we’re going to build AI into every product that we build—whether it’s a client, whether it’s an edge platform for retail and manufacturing and industrial use cases.
Whether it’s an enterprise data center, where they’re not going to stand up a dedicated 10-megawatt farm, but they’re also not going to move their private data off-premises; they’ll use foundational models that are available in open source as well as in the big cloud and training environments.
We firmly believe in this idea of democratizing AI, opening the software stack and creating and participating with this broad industry ecosystem that’s emerging. It’s a great opportunity and one that Intel is well-positioned to participate in.
We’ve seen that the AI TAM [total addressable market] is part of the semiconductor TAM. We’ve always described this $1 trillion semiconductor opportunity, with AI being one of those superpowers, as I call it, driving it. But it’s not the only one. And it’s one that we’re going to participate in broadly across our portfolio.
1 Millionth Xeon Unit To Ship
In the data center, our fourth-gen Xeon scalable processor is showing strong customer demand despite the mixed overall market environment.
I am pleased to say that we are poised to ship our 1 millionth fourth-gen Xeon unit in the coming days. … We also saw great progress with fourth-gen’s AI acceleration capabilities, and we now estimate more than 25 percent of Xeon data center shipments are targeted for AI workloads.
Also in Q2, we saw third-party validation from MLCommons when they published ML [machine learning] training performance benchmark data showing that fourth-gen Xeon and Habana Gaudi 2 are two strong, open alternatives in the AI market that compete on both performance and price versus the competition.
End-to-end AI-infused applications like DeepMind’s AlphaFold and algorithm areas such as graph neural networks show our fourth-gen Xeon outperforming other alternatives, including the best published GPU results. … Our data center CPU road map continues to get stronger and remains on or incrementally ahead of schedule with Emerald Rapids, our fifth-gen Xeon scalable, set to launch in Q4 of ’23.
Sierra Forest, our lead vehicle for Intel 3, will launch in the first half of ’24. Granite Rapids will follow shortly thereafter. For both Sierra Forest and Granite Rapids, volume validation with customers is progressing ahead of schedule.
Multiple Sierra Forest customers have powered on their boards. And silicon is hitting all power and performance targets.
Clearwater Forest, the follow-on to Sierra Forest, will come to market in 2025 and be manufactured on Intel 18A.
Server Softness Persists
While we performed ahead of expectations, the Q2 consumption TAM for servers remained soft, with persistent weakness across all segments but particularly in the enterprise and rest of the world, where the recovery is taking longer than expected across the entire industry.
We see the server CPU inventory digestion persisting in the second half, additionally impacted by the near-term wallet-share focus on AI accelerators rather than general-purpose compute in the cloud.
We expect Q3 server CPUs to modestly decline sequentially before recovering in Q4. … There are great analogies from history that we can point to. Virtualization, for example, was supposed to destroy the CPU TAM and ended up driving new workloads.
If you think about the [Nvidia] DGX platform, the leading-edge AI platform, it includes CPUs. Why? Head nodes, data processing, data prep dominate certain portions of the workload.
Demand trends are relatively stronger across our broad-based markets like industrial, auto and infrastructure. Although, as anticipated, NEX did see a Q2 inventory correction, which we expect to continue into Q3.
In contrast, PSG [Programmable Solutions Group], IFS [Intel Foundry Services] and Mobileye continue on a solid growth trajectory. And we see the collection of these businesses, in total, growing year on year in calendar year ’23.
Much better than third-party expectations for a mid-single-digit decline in the semiconductor market, excluding memory. … We have now PRQ’d 11 of the 15 new products we expected to bring to market in calendar year ’23. …
In addition to executing on our process and product road maps during the quarter, we remain on track to achieve our goal of reducing costs by $3 billion in 2023 and $8 billion to $10 billion exiting 2025. … We have already identified numerous gains in efficiency, including factory loading, test and sort time reduction, packaging cost improvements, litho field utilization improvements … expedites, and many more. … [Expected weakness in the third quarter is due in part] to data center digestion for the cloud guys.
A bit of enterprise weakness. Some of that is more inventory. And the China market … hasn’t come back as strongly as people would have expected overall. And then the last factor was … the pressure from accelerator spend being stronger. … That said, our overall position is strengthening.
And we’re seeing our products improve. We’re seeing the benefits of the AI capabilities in our gen-four and beyond products improving.
We’ll also start to see some of the use cases like graph neural networks … AlphaFold, showing best results on CPUs as well, which is increasingly gaining momentum in the industry as people look for different aspects of data preparation, data processing, different innovations in AI.
So all of that taken together, we feel optimistic about the long-term opportunities that we have in data center. … We see a lot of long-term optimism even as, near term, we’re working through some of the challenging environments of the market not being as strong as we would have hoped. …
AI Effects On Data Center Business
With regard to the data center … I’ll just say we executed well. Winning designs, fighting hard in the market, regaining our momentum, good execution. … So overall, it’s feeling good.
Road map’s in very good shape, so we’re feeling very good about the future outlook of the business as well. … We do think that the next quarter, at least, will show some softness.
There’s some inventory burn that we’re still working through. We do see that big cloud customers in particular have put a lot of energy into building out their high-end AI training environments. And that is putting more of their budgets focused or prioritized into the AI portion of their buildout.
That said, we do think this is a near-term surge that we expect will balance over time. We see AI as a workload, not as a market, which will affect every aspect of the business—whether it’s client, whether it’s edge, whether it’s standard data center, on-premises enterprise or cloud. … We see our accelerator products portfolio is well-positioned to gain share in 2024 and beyond.
‘Raft’ Of AI Enablement Coming
Today, you’re starting to see that people are going to the cloud and goofing around with ChatGPT, writing a research paper. And that’s, like, super cool. And kids are, of course, simplifying their homework assignments that way.
But you’re not going to do that for every client to become AI-enabled. It must be done on the client for that to occur.
You’ll have the new effects: real-time language translation in your Zoom calls. Real-time transcription. Automation. Inferencing. Relevance portraying. Generated content in gaming environments. Real-time creator environments. … New productivity tools. Being able to do local legal brief generation on clients.
One after the other, right across every aspect of consumer, developer and enterprise efficiency use cases, we see that there’s going to be a raft of AI enablement.
And those will be client-centered. Those will also be at the edge. You can’t round trip to the cloud. You don’t have the latency, the bandwidth or the cost structure to round trip, let’s say, inferencing in a local convenience store to the cloud.
It will all happen at the edge and at the client. So with that in mind, we do see this idea of bringing AI directly into the client.
And Meteor Lake, which we bring into the market in the second half of the year, is the first major client product that includes native AI capabilities—the neural engine that we’ve talked about.
And this will be a volume delivery that we will have. And we expect Intel to be the volume leader for the client footprint, the one that’s going to truly democratize AI at the client and at the edge.
And we do believe that this will become a driver of the TAM. Because people will say, ‘Oh, I want those new use cases. They make me more efficient and more capable, just like Centrino made me more efficient because I didn’t have to plug into the wire. Now I don’t have to go to the cloud to get these use cases. I’m gonna have them locally on my PC, in real time and cost-effectively.’
We see this as a true AI PC moment that begins with Meteor Lake in the fall of this year.
Custom Silicon Opportunity In AI
I have multiple ways to play in this market. Obviously, one of those is foundry customers. And we have a good pipeline of foundry customers for 18A foundry opportunities.
And several of those opportunities that we’re investigating are … people looking to do their own unique versions of their AI accelerator components. And we’re engaging with a number of those, but some of those are going to be variations of Intel standard products.
And this is where the IDM [integrated device manufacturing] 2.0 strength really comes into play—where they could be using some of our silicon combining it with some of their silicon designs.
And given our advanced packaging strength, that gives us another way to participate in those areas. And, of course, that reinforces that some of the near-term opportunities will just be packaging, where they’ve already designed with one of the other foundries, but we’re going to be able to augment their capacity by engaging immediately on packaging opportunities. And we’re seeing a pipeline of those opportunities. … We see AI being infused in everything.
And there’s going to be AI chips for the edge. AI chips for the communications infrastructure. AI chips for sensing devices. For automotive devices.
And we see opportunities for us, both as a product provider and as a foundry and technology provider across that spectrum. And that’s part of the unique positioning that IDM 2.0 gives us for the future.