Pat Gelsinger: Intel Will Be ‘More Ecosystem-Friendly’ Than Nvidia

In his exclusive interview with CRN, Intel CEO Pat Gelsinger talks about how Intel’s recent reorganization will give more attention to Intel’s “under-focused” graphics and network businesses, how Intel plans to win market share from Nvidia and why Intel is looking at building new paid software services and purpose-built systems in the future.

Contesting An ‘Uncontested’ Nvidia

Intel is making a major, unprecedented push into the graphics space that is putting the chipmaker in more direct competition with Nvidia than ever before, and CEO Pat Gelsinger said one way the company will stand out is by being “much more ecosystem-friendly.”

In his exclusive interview for CRN’s October cover story, Gelsinger framed this support for ecosystems around Intel’s growing portfolio of discrete GPU and AI accelerator products in a few ways.


For one, he said, he doesn’t want to compete with OEM and channel partners by selling purpose-built appliances in the same way that Nvidia does with its DGX lineup of systems for AI, though he didn’t dismiss the idea of building appliances to serve as a blueprint for the wider ecosystem.

“I may need to do some appliances early on to help kickstart the industry, but I’m going to do it in a way that even when I’m building appliances, it’s sort of like, yeah, yeah, go take my appliance but then get me out of the appliance business,” he said.

Nvidia declined to comment.

Gelsinger also sees Intel being more ecosystem-friendly than Nvidia when it comes to the software that supports the underlying components. For Intel, this is centered on oneAPI, a set of toolkits that lets developers use a single programming model across different types of architectures. This includes Intel’s silicon products such as CPUs, GPUs and FPGAs as well as products from competitors, like Nvidia’s GPUs.
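To make the idea of a single programming model concrete, here is a minimal, illustrative SYCL/DPC++ sketch in the style oneAPI promotes. It is not code from Intel or from the interview; the vector-add kernel, the 1,024-element size and the variable names are arbitrary choices for the example. The point is that one C++ source file expresses a kernel the runtime can dispatch to a CPU, a GPU or another accelerator.

```cpp
// Minimal SYCL/DPC++ sketch: one kernel source, dispatched to whatever
// device the runtime selects (a GPU if present, otherwise the CPU).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // default device selector
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    const size_t n = 1024;  // arbitrary size for the example
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    {
        sycl::buffer<float, 1> bufA{a}, bufB{b}, bufC{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{bufA, h, sycl::read_only};
            sycl::accessor B{bufB, h, sycl::read_only};
            sycl::accessor C{bufC, h, sycl::write_only};
            // The same vector-add kernel runs unchanged across backends.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope here and copy results back to the host vectors
    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

Compiled with oneAPI’s DPC++ compiler (for example, icpx -fsycl), the same source can target Intel CPUs and GPUs, and vendor plugins such as Codeplay’s extend SYCL to Nvidia and AMD GPUs, which is the ecosystem argument Gelsinger is making.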

“Nvidia has become too proprietary, and that’s widely seen in the industry, and so we’re going to fill out that stack with oneAPI but do it in a way that’s much more favorable and open to the industry and their innovations,” he said.

It’s a tall order to take on Nvidia, which dominates the market for client and server GPUs as it continues to report double-digit growth and record revenues.

But Gelsinger said creating a new, viable alternative to Nvidia is an important mission for him. In fact, he sees Nvidia’s dominance in AI as a bigger threat than the one posed by British chip designer Arm and the new wave of server CPUs it’s enabling.

“The architectural disruption that I’m more concerned about in the data center is the AI one, not the Arm one, and in that sense, hey, [Intel’s] products are starting to come forward to put pressure on, I’ll say, an uncontested Nvidia,” he said. “Well, they’re going to get contested going forward, because we’re bringing leadership products into that segment.”

In his interview with CRN, Gelsinger talked about how Intel’s recent reorganization will give more attention to Intel’s “under-focused” graphics and network businesses, why he decided to wind down the RealSense business, how Intel plans to win market share from Nvidia and why Intel is looking at building new paid software services and purpose-built systems in the future.

Can you elaborate on how the organizational changes with the Data Platform Group and the formation of the new software and graphics groups will “accelerate” Intel’s execution and innovation?

As you think about it, for the most part, these three areas were all under the old Data Center Group, and in the middle of it are the cloud and data center, which hey, this is competitive now. There is no confusion on [Datacenter and AI Group Leader] Sandra [Rivera’s] part. It is cloud, enterprise, data center: deliver the greatest Xeon product line to go be successful in that space, deepen the relationship with the cloud vendors. She has a very, very focused remit of what her assignment is.

Now, when you think about the networking side of it: Unquestionably, we’ve started to have some very, very nice positions. The IoTG business for us, very successful as you saw [in] the last couple of quarters, very substantial growth rates. We’re the unquestioned leader in areas like O-RAN and vRAN for 5G infrastructure, and we’re getting very good uptake for that. Our SmartNIC program — the Tofino, Barefoot assets — is starting to get traction, and it simply rounds out the data center business. And I had the opportunity to put one of the most respected leaders and innovators in networking, Nick McKeown, in charge of driving our network business. He’s my neighbor. I kept cajoling him as we took long walks in the woods together: “Quit telling people how to fix the networking industry, come and do it.” If you [had to] pick a more respected leader in networking today, you’d be hard-pressed to find somebody that has more respect, more honors, more recognition than Nick. So I had an under-focused asset that was part of this big thing, and now, with one of the world’s great leaders in this space, wow. These are good, solid business assets for his leadership there, so I’m very excited about that.

On the other side, we simply have not been focused enough on areas like high-performance computing, where we really [need] to let our technology shine in that sense. So moving those [graphics and HPC assets] over to Raja [Koduri], there is a high affinity. GPUs get used predominantly in two places: one is in AI training centers inside the cloud, the other is in high-performance computing. So I brought that together under Raja, so he has the integrated graphics, discrete graphics, GPU and HPC assignment. And this is who he is. If you talk to him for more than three minutes, he just drips with these thoughts and ideas. So again, it really was taking two areas that were not getting as much focus inside of the company and bringing key leaders in place who are passionately focused on what we think are just great business opportunities for us.

And then the third play is what we’ve done with Greg Lavender. Now I have a highly respected software leader running the thousands of people doing industry-standard software and the core reference platform, all pulled under one leader. Some of that was under Raja, some of that was under the Data Center [Group] before, so we’ve now created essentially four organizations that have very, very world-class leaders, specific market assignments in front of them and a good set of assets, and I think we’re going to see pretty tremendous results come from it. I couldn’t be happier with how it’s taking shape already.

An Intel spokesperson told CRN in August that the RealSense business was being wound down. Looking at Intel's product portfolio, obviously it's changed a lot over the years, and there have been expansions and divestments before you joined as CEO. What is your philosophy for managing Intel's product portfolio when you're making decisions like you did with RealSense?

What you’ve seen me do is, I now have six business units, plus one, which I’ll describe in a second. You have the data center business, the client [PC] business, the network [and edge] business, the foundry business, the graphics and HPC business, the autonomy and Mobileye business. Those are my six business units. We’re exiting the NAND business, so that’s the plus one with the SK Hynix deal.

Simply put, to each of those business leaders, they have a clear market focus. And if the assets fit one of those six business units, then I want to invest in it. If it doesn’t, then I don’t. Right? And then how do I move out of it? It’s sort of one by one, we’re just going to rationalize everything into those business units. An example would be silicon photonics: Hey, [it’s] going pretty [well], and it’s pretty foundational for where we see the next generation of networking going, so that’s part of the network and edge business unit. You’re going to see us double down in that area. [It] feels pretty good in that respect. Where others, like the RealSense decision: Hey, there [are] some good assets that we can harvest, but it doesn’t fit one of those six business units that I’ve laid out. So we’ll simply be rationalizing all of our assets, all of our investments, all of our resources against those six business unit focuses.

What does Intel need to do to win GPU market share from Nvidia in segments like gaming, AI and high-performance computing?

Deliver great products. Period. Full stop. Nvidia had essentially a 10x or better performance leadership for a decade. If you have that, a 10x leadership for 10 years, people are going to take advantage of that. And then they got really lucky: AI happens. A 30-year overnight success, and they harvested it really well at that point. So they worked hard, they earned it and then they got lucky in that respect.

So what do we have to do? Deliver great products in those segments. [They have to have] compelling features, performance, power at the right price points with the right software capabilities to go with them. And the market’s hungry for us to deliver them an alternative. We need to then deliver it with unique, differentiated value-add. If [a customer comes] in and says, “Hey, they’re delivering 100 [tera operations per second] and you’re delivering 100 TOPS, why would we want to work with Intel? I already got this one over here.”

Well, we better show up with some differentiation to our strategy. And in the GPU business, we go to the customer and we say, “Well, guess what, we just happen to be the unquestioned leader in integrated graphics. You already qualify all of our stuff all the time for every unit that you’re going to ship, and we’re going to make it seamless to go from integrated to discrete on the Intel platform. And even better than that, we’re going to make integrated and discrete work together. So if you have three [execution units] worth in the integrated [GPU] and you have 10 EUs worth in the discrete, we’re going to give you 13 EUs worth, and you’re only going to buy 10 EUs worth in the discrete GPU, and you’re going to qualify one product that [works] seamlessly between those two.” Well, that’s pretty differentiated. And that’s just one example.

But you have to deliver great products. They have to be clean, well validated, supported by ISVs, etc., create some differentiation and value and then build on our great channel partners and the programs that we have with them, our strong OEM presence — and we’re going to start to see the market respond very favorably to that. And you go segment by segment. Similarly, in the AI training space, customers, hey, they want an alternative at this point. They’re begging for a good alternative at this point, and we’re going to give them one. And in that, we have our Habana Labs’ Gaudi instances now going live on [Amazon Web Services], and we’re starting to get the first customers turned on. We’ll have more of the HPC GPU solutions [like Ponte Vecchio] coming forward to the marketplace. And over time, I’ll say, we view those as the breakthroughs. As we start to establish those presences, we just got some really cool stuff on the horizon that [will make] people go, “Oh, this thing got some sizzle to it as well.”

Intel oneAPI has been positioned as kind of the answer to what Nvidia has done with CUDA, except instead of dealing with just the GPU, it also interfaces with the CPU and other components. Does oneAPI have what it takes to steal mindshare away from CUDA in the developer ecosystem?

Think about it this way: oneAPI goes much lower in the stack than CUDA does. And CUDA today, it goes higher in the stack than oneAPI does. So to some degree, they’re overlapping, but they clearly have discrete areas of capabilities between those. And what you’ll see us do is we will take oneAPI to the top of that stack. So we’ll fill it in, and we’re also going to be very aggressively embracing the open standards of the industry, like PyTorch, like oneDNN, other things like that, because deep in the Intel historical philosophy [are] open interfaces, open APIs, open standards from the days of Wi-Fi, PCIe and USB, etc. You will see us be the stalwart for those open ecosystems in the industry. Nvidia has become too proprietary, and that’s widely seen in the industry, and so we’re going to fill out that stack with oneAPI but do it in a way that’s much more favorable and open to the industry and their innovations.

When you announced that your new graphics business unit would be called the Accelerated Computing Systems and Graphics Group, it brought to mind what Nvidia is doing with DGX, its line of purpose-built AI systems that run on the chipmaker’s GPUs. I know Intel has traditionally sold servers at lower levels of integration, but does Intel have any interest in building its own purpose-built systems for accelerated computing like Nvidia does with DGX?

Will we do some systems offers? Well, we already are with what we’re doing with Habana Labs as we’re bringing those offerings to the marketplace. Will we do more? Probably so in that respect. But if we go back to the answer to the earlier question, I largely want to be an enabler of the ecosystem, not a competitor to the ecosystem. And historically, if you go way back in time, one of the programs I ran very early on after taking over the [Digital] Enterprise Group at Intel was the industry-standard server definition. Well, the industry-standard server became the platform for Xeon, became the building block for every data center, became the building block for the cloud, and we wouldn’t have the cloud today had we not done the industry-standard server definition.

And in the early days of that, we delivered servers: here’s the box, here’s the reference design, here’s the BIOS, here’s the APIs, here’s the toolkit to go with it. And today, we don’t deliver servers [like Dell and Hewlett Packard Enterprise do]. We have a vibrant set of industry partners that deliver the platforms [while Intel sells white-box server systems].

What are we doing in OpenRAN [for 5G infrastructure]? Again, we’ve gotten highly prescriptive with the definition of what the platform is, building it out, what’s required for 5G, how do you do massive MIMO acceleration and quality of service and all these other type of things that go into it. And should I deliver that as an Intel-branded product or not? Well, for the most part, we’re enabling our ecosystem to do that. Again, we go and standardize it.

So when I think about things like DGX, my general philosophy is do the same thing. Define the system, build the system, get the early market going but then enable a rich ecosystem to do it. And I think ultimately that is the partner-friendly thing to do in that regard. Some of it’s OEM-friendly. Some of it’s partner-friendly as well. But I think in that regard, you’ll see our strategy contrast with what Nvidia is doing to be much more ecosystem-friendly in our approach.

So you would see such offerings as more of a reference design as opposed to an appliance?

Yeah. Now, that doesn’t mean I won’t do appliances: to get the reference design ball established, hey, I may need to do some appliances early on to help kickstart the industry, but I’m going to do it in a way that even when I’m building appliances, it’s sort of like, yeah, yeah, go take my appliance but then get me out of the appliance business. Because those are areas that we see the breadth and innovative aspects of the ecosystem handle. Do I think I’m ever going to do the high-end training system appliance for the Chinese market? No, I don’t, at that level, and I’m going to lean on my partners in that respect. Do I think this is an area that Dell or [Hewlett Packard Enterprise] is going to be a good leader in over time? Yeah, I do, and I have to enable and help them do that.

When you were CEO of VMware, Nvidia was an important strategic partner, and over the last year, we've seen that partnership bloom. It has resulted in vSphere enabling the virtualization of GPUs, and this year Nvidia launched its own software suite, called Nvidia AI Enterprise, which works in tandem with vSphere as part of an exclusive agreement between VMware and Nvidia. I have to imagine you're privy to all these details and all these things that have happened. Paid end-user software services seem to be an important strategy for Nvidia going forward in how customers take advantage of compute, how they manage compute and how they deploy AI applications. Is the strategy of offering paid software services something that Intel is considering more of in the future?

Are we considering it? Yes, of course. Software is clearly making that move to more SaaS-oriented software delivery, so in some regards, it’s a somewhat natural progression, and having done a lot of this for the last eight years at VMware, I’ve gotten a deep appreciation for it and the challenges.

I’d also say again, though, what [are] the ecosystem-friendly views of those services? How do we build on what others do? And even as we might establish some of those capabilities that we do ourselves and deliver [them] as nice complements to our products, I’d say that’s also part of the reason I brought [CTO] Greg Lavender in [as head of the new Software and Advanced Technology Group], somebody who builds software at scale and [knows] what it requires to deliver SaaS services at scale. So beyond that, I’d say, we haven’t laid out specific, definitive plans at this nascent phase, but I do expect that you’ll see more in that area: How do we leverage our software assets? How do we have unique monetized software assets and services that we’ll be delivering to the industry, that can stand in their own right? And yeah, that’s a piece of the business model where I do expect to do more in the future.

Three years ago, when Intel was searching for its next CEO [after Brian Krzanich left], Jon Fortt of CNBC called you out as a good fit for the role. But then you responded on Twitter that you loved being the CEO of VMware and that you weren't going anywhere else. The part of the statement I want to focus on is what you said after that, which was, “the future is software!!!” Now that you are back at Intel, have you changed your mind about that statement?

Well, I’ll make two little comments. One is maybe a little bit of self-deprecation at this point. But I’m the sitting CEO of a software company. What could I say? And as a software CEO, what I say at that point matters for a publicly traded company called VMware as well as what my interest may or may not have been at the time in Intel. So first, there’s a little bit of contextualization there that’s appropriate.

[The] second aspect to it is: think about the overall hardware versus software industry. Thirty years ago, hardware was 2x the size of software. Here we are today: Software is 2.5x the size of hardware. That’s a pretty simple observation that hardware has grown, software has grown dramatically, and software and SaaS [tie into] some of the things that we were already talking about in that respect.

And if you go back to the answer to the last question: Are we going to have more software revenue, software products at Intel going forward? Yes. Customers have moved from being focused on the hardware level to the software-level interfaces, and our job now reflects one of the things that I’ve learned in my 11-year vacation: delivering silicon that isn’t supported by software is a bug. We have to deliver the software capabilities, and then we have to empower it, accelerate it, make it more secure with hardware underneath it. And to me, this is the big bit flip that I need to drive at Intel.

If you deliver a hardware product that doesn’t have the full support of the software ecosystem already in place, why did you waste the transistors? Why did you waste the validation time? Why are you wasting this power budget for our customers if you haven’t enabled the software ecosystem? Software is more important. Those APIs are more important. The developer is more important. And in that sense, I have to essentially create that flip at Intel to fully realize what we do in silicon. Not only do we need to deliver the software, the BIOS, the firmware, the p-code, all of those things, the PyTorch support, etc., concurrently, but in fact we need those delivered quarters or years ahead of time so software development can be in place by the time we deliver the hardware that enhances it.

Earlier, when you said these new paid software services under consideration have to be ecosystem-friendly, do you envision this being an opportunity that channel partners could eventually partake in?

Oh, absolutely. Oh yeah, absolutely. Very much so, 101 percent, exclamation point! And by the way, I do think in that regard, and I don’t want this to sound denigrating, but I think many channel partners got too comfortable with sheet-metal-oriented business models. And the ones that I have the most respect for have, at a minimum, started to deliver more of the software, SaaS and additional services on top of it and, in many cases, built entirely new business practices.

Sheet metal. Can you just elaborate on that?

Way too many of the channel partners have been, “Hey, I’m delivering the hardware components, the innovations wrapped in sheet metal, right, associated with it. And if I do any software and services, it’s largely to support the sheet metal underneath it.” As I poke some of my Intel colleagues now, I’ve said in many cases [these channel partners] view software and SaaS almost like wood filler: “We have this perfect piece of furniture that we’re building [with Intel’s] silicon. Oh, we need to fill a gap, let’s stick a little software in it to fix it.” It’s like, no! The software is the value! And now we have to enhance it, and that has to be this pretty major shift for many channel partners as well.