New Intel CTO Greg Lavender Talks ‘Developer Reboot,’ Making ‘More Sophisticated Software’

In an exclusive interview with CRN, new Intel CTO Greg Lavender talks about the company’s ‘developer reboot,’ which includes an unprecedented persuasion campaign for independent software developers, and building ‘more sophisticated software’ to create new business opportunities for partners.

Helping Realize Gelsinger’s ‘Software-First’ Vision

New Intel CTO Greg Lavender is vowing an unprecedented persuasion campaign to convince the world’s independent software developers to embrace Intel as their silicon platform of choice over competitors.

In an exclusive interview with CRN—his first since he joined Intel in June—Lavender said the campaign is part of a “developer reboot” he is overseeing that will kick off in public on Oct. 27 and 28 with the Intel Innovation event, a reimagining of the discontinued Intel Developer Forum.

“We got all the great software and hardware, but how do I enable them to want to choose it when they have a choice, and that’s really where we want to reach out in a new way that historically we haven’t,” he said in the September interview for CRN’s cover story on Intel’s “software-first” strategy.

Lavender said the Santa Clara, Calif.-based company has done a good job working with developers focused on systems-level programming, so the company is now going “higher up in the stack,” where developers are working on a variety of applications, platforms and software services.

This “software-first” focus championed by Intel CEO Pat Gelsinger will require the chipmaker to build “more sophisticated software,” according to Lavender, that will let Intel’s vast ecosystem of partners realize more value from a “total system” perspective and create new business opportunities.

“We need to enable the value of partners and channel and whatever up stack to drive additional value out of that system,” he said. “So therefore, we also have to move up stack with them because they’re moving to higher levels of abstraction. So when I look at all [of Intel’s] foundational technology—compilers, firmware, BIOS—that’s table stakes.”

Gelsinger hired Lavender in June as CTO, senior vice president and general manager of the Software and Advanced Technology Group, a new division formed that month as part of a restructuring that centralized many of the company’s software efforts under one umbrella.

Lavender previously served as VMware’s CTO while Gelsinger led the virtualization giant. But the two first started working together more than 10 years ago when Lavender ran engineering for Sun Microsystems’ Solaris operating system and Gelsinger was near the end of his first tenure at Intel. The two also worked together when Lavender was CTO at Citibank prior to his time at VMware.

“I guess I wasn’t surprised when he called me up and said, ‘I need you to come over here to help me with my software business [at Intel], because we’ve got a great hardware business but to be competitive, obviously in the AI [and machine learning] space, in the cloud space, we need to bring a software-first focus,’ which he talks about a lot inside the company,” Lavender said.

In his interview with CRN, Lavender elaborated on Intel’s plans to build “more sophisticated software,” enable more value for partners and reach out to more independent software vendors. He also talked about the importance of Intel’s oneAPI unified programming model, how the company’s software strategy differs from Nvidia’s and what kind of new paid software services Intel could introduce.

The work that you did at Citibank and then VMware, obviously you were very much focused on software-defined data centers and networks. How is that experience informing what you’re doing at Intel?

For any enterprise customer, telco, even with virtualization infrastructure—let’s say everybody’s at least bought into the idea of Infrastructure as a Service. Infrastructure as a Service provides that sort of foundational software abstraction over which you get efficiencies on how to manage things. You get more reliability because you can bring more nines with VMware vMotion. Single-node, multi-node failures don’t take your system down. You can still operate.

I think the thing that Intel does is provide that foundation with all of our processors—let’s take the server side—with all of our core technology. And so RAS features—reliability, availability, serviceability—are fundamental features of the [Intel Xeon] platform, where something like VMware gets to do its thing. And so when you bring those two layers together, whether it’s in a public cloud, whether it’s in a [colocation facility], whether it’s in an enterprise, a telco, you abstract the developer away from the fundamentals that most of us have been working at, the lower-level software stack. So the value transfers up stack, and you get into Platform as a Service, Containers as a Service, Data as a Service, and now we’re all talking about AI as a Service. So as you get into those higher levels of X-as-a-Service abstraction, you still need that fundamental layer [to be] rock solid, even more so. But at the same time, you need it to scale, so it’s not about building a single CPU in a single system. It’s multiple CPUs with multiple cores with large memories, and lots of racks of servers that are power-efficient, reliable as much as possible, and then that software layer has to be always on.

And so I think with Intel, what I bring and where Pat is taking us is, whoever that software Infrastructure-as-a-Service provider is, whether it’s VMware, whether it’s Google [Cloud], whether it’s Amazon [Web Services], whether it’s [Microsoft] Azure, whether it’s private clouds, hybrid clouds, we’re going to be there. We have some competition obviously from AMD, but it’s OK, we like competition. It makes us smarter, makes us work harder.

But we have to move the value chain up the stack, because all the things we do, which is foundational software, is now taken for granted. We still have to do it well. It has to be really good quality—we can’t break. And at the same time, we need to enable the value of partners and channel and whatever up stack to drive additional value out of that system. So therefore, we also have to move up stack with them because they’re moving to higher levels of abstraction. So when I look at all the foundational technology—compilers, firmware, BIOS—that’s table stakes.

And so what we’re looking at now is how can we be more enabling with our partners. [I’ve been] talking to the CTOs of Dell, [Hewlett Packard Enterprise], Microsoft. I was on [a call] with Red Hat’s CTO [recently]: How do we partner [on things like] containers, make containers more secure? How do we get zero trust in these platforms from the client edge device all the way back to the cloud?

So all these higher-level things all become quite relevant, and we could open up our platform from a power management perspective, security perspective to give those application layers opportunities to take advantage of our [silicon] root of trust, take advantage of the capabilities we can provide to help do dynamic power management or what have you. We have these efficiency cores and performance cores [in the upcoming Alder Lake client CPUs]. Don’t just leave it up to the operating system to decide what to do. Let the OEMs, ODMs and the channel partners have access to APIs that can do whatever they want to do, let’s say in the telco or in the edge. It really is about us becoming more heterogeneous and then providing more sophisticated software to let other value chains realize the value from that total system, not just a single processor or a single box.

When you’re talking about developing software that’s further up that value chain, are these things that are in development and we haven’t really seen yet?

It’s because we’re working with [Microsoft] Windows and we’re working with Linux [partners]. We’re working with our existing partners—both server and client PC partners—to enable all those capabilities. So I will say that Windows 11 would have those capabilities. It takes a couple years to incubate this stuff and get it into the ecosystem. With the Intel [Innovation event] in [late] October, we’ll be announcing some of these things.

I asked Pat Gelsinger if Intel is looking into new paid software services or products, and he said yes, and I’m just wondering, when you’re talking about going up the value chain, is that where the potential is for future paid software services?

Let me give you a VMware perspective on where that potential is. So everybody’s talking zero trust [architecture]. So let’s take the last 24 months, [the pandemic happening during most of it], and the whole SD-WAN and the whole SASE model for the edge, with everybody going and working from home and companies that want to ship thin-client devices with [virtual desktop infrastructure] or laptops to maybe millions of workers. Everyone’s sort of racing to how you secure that edge, and lots of it is just cobbled together out of necessity. So companies like Zscaler took off. Zoom took off. All of this value was created in the market due to this immediate distribution of everybody.

But there are better ways to secure that edge at real zero trust, for example, with secure computing, trusted computing capabilities, which we can provide at the device level, let’s say in every laptop. We have it in servers, but [let’s focus] on laptops. And so these new ways of working have opened up new vulnerability paths. Today [in the corporate world], the IT guys—and I knew this because I ran cybersecurity engineering at Citibank—have all this stuff running on your laptop to make sure that you’re not infected, but it takes up like 40 [percent to] 50 percent of your CPU just to do that, and so you only get half of the value of what you got there. But if you have a zero-trust model where you don’t have a bunch of bloatware running on your PC or Windows environment—if you can unload all that stuff and actually have it ‘built in’ and, more importantly, be running [machine learning] inferencing models, detecting anomalous things happening at an edge—you can then provide that telemetry back to an IT department, to a service, VMware’s Carbon Black, for example.

And so again, because we have the platform and we sort of know everything that’s running, and we know everything about the power and the battery life and all this other stuff, we can serve that telemetry up into some SaaS service. It doesn’t have to be ours—it could be somebody else’s or a partner’s, where they could potentially start to monetize that. Now we have those capabilities [that are] coming, because it takes a while for that to get into the ecosystem, whether it’s Chrome OS for Chromebooks or Windows for most of the laptop ecosystem. But those capabilities are in our platforms. We just have to work with our partners to bring them forward in new ways so they can create new business models, and there [are] possibilities of revenue sharing there.

It sounds like capabilities that Intel could develop for itself to monetize but also for partners to monetize as well. Is that correct?

If that’s [the partner’s] core business, they can run the service, but we’re the enabler of the service, so maybe there [are] some opportunities. We haven’t worked those things out yet. I’m just painting a picture of what the future [could] look like. Before, the classic model was you run Symantec or McAfee and a bunch of other stuff—agent technology—on your laptop, bogging it down, because everybody tried to lock it down, but still most ransomware comes through people’s laptops. Plugging into somebody’s network—that’s how it starts. I saw that at Citigroup.

So we want to secure that device but also provide intelligence from the device and feed that intelligence back, because a lot of [those things are] false positives—you want to feed the useful stuff back. [IT departments] can then take immediate action, for example: quarantine the device, lock it from the corporate network, not enable it to access certain corporate apps. So you have a way of having a graduated set of actions that you would take as opposed to just [shutting] it down. There [are] these ecosystems evolving, and it’s no longer just you and your laptop, especially if you’re a corporate user. That laptop’s corporate property. It plays into a corporate network ecosystem, but you’re roaming around in an unprotected environment, not the corporate environment. So there’s got to be new technology, new software, new business models, new AI to make sure those devices are within the policies set by the security people.
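Lavender’s “graduated set of actions” is essentially a policy table keyed on how confident the telemetry signal is. A minimal, purely hypothetical sketch in Python (the thresholds and action names are invented for illustration; this is not an Intel or VMware API):

```python
# Hypothetical graduated-response policy: map the confidence of an
# anomaly signal from device telemetry to an escalating action,
# instead of always shutting the device down.
POLICY = [
    (0.9, "quarantine"),       # near-certain compromise: isolate the device
    (0.7, "block_corporate"),  # likely compromise: cut corporate-network access
    (0.5, "restrict_apps"),    # suspicious: deny sensitive corporate apps
    (0.0, "monitor"),          # default: keep collecting telemetry
]

def respond(anomaly_score: float) -> str:
    """Return the first action whose threshold the score meets."""
    for threshold, action in POLICY:
        if anomaly_score >= threshold:
            return action
    return "monitor"

print(respond(0.95))  # quarantine
print(respond(0.6))   # restrict_apps
print(respond(0.1))   # monitor
```

The design point is the one Lavender makes: low-confidence signals, which are often false positives, should trigger monitoring rather than an outright shutdown.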

How important will it be for Intel to bring in revenue through software offerings in the future?

I’ve only been here about 70 days. That’s on my to-do list. Pat has talked about a ‘torrid pace.’ Pat is moving very fast. I did my [reorganization] in 60 days. Most people take 100. And so I’m really optimistic about the opportunities of what’s possible. But first we have to educate the market too that these capabilities don’t always require specialized compute. We can run a lot of machine learning and deep learning workloads on a Xeon processor with really good performance. And so you don’t have to have this other expensive thing generating lots of heat and power.

But when you want that—through cloud vendors [like Amazon Web Services] providing deep learning platform architecture—we have custom silicon with our Habana Gaudi technology. We’ll have a competitive GPU in the market next year, and then I think we’ll look at what ways—if we want to compete in particular verticals, maybe paid software in a particular vertical makes some sense. But at the same time, I think it’s still a little fuzzy where that should be. So I think from my perspective, there’s lots of great technology going on around the world, including in Israel. There’s just amazing technology and innovation happening in this space, so it’s still very early. And so I think, if you look at the valuations of some of these things, they’re really early. You want to let things settle down a little bit to see where the real value is going to accrete and then go after that.
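Lavender’s claim that many machine learning workloads run fine on a general-purpose CPU rests on the fact that inference is mostly dense multiply-add arithmetic. A framework-free toy sketch (pure Python; the two-layer network, its weights and the input are made up for illustration and have nothing to do with Intel’s actual libraries):

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, x):
    # Dense matrix-vector product: the core operation of most inference.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(v):
    exps = [math.exp(x - max(v)) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def forward(x, W1, W2):
    # A tiny two-layer network: everything here is multiply-add,
    # exactly the work a CPU's vector units are built to accelerate.
    return softmax(matvec(W2, relu(matvec(W1, x))))

# Made-up weights and input, just to show the mechanics.
W1 = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.4]]
W2 = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]]
probs = forward([1.0, 2.0], W1, W2)
print(probs)  # two class probabilities that sum to 1
```

In production these loops would be handed to vectorized kernels (e.g. Intel’s oneDNN-accelerated builds of TensorFlow or PyTorch), but the underlying arithmetic is the same, which is why a Xeon can serve many inference workloads without a discrete accelerator.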

So let’s talk about the channel. How can vendors like Intel enable new business models for channel partners? Looking at how you're going up the value chain with software, what are the other categories in which your software efforts will enable new business opportunities for channel partners?

So we have myself and Sandra Rivera, who’s the head of our Data Center and AI Group. She and I co-own for Pat an AI strategy and execution program, with me on the software side and her on the hardware accelerator side. We have a number of accelerators that we announced at Intel Accelerated. Our Ponte Vecchio GPU. We already have our Habana Labs Gaudi processors that have been previously announced and are going live in [Amazon Web Services] this fall. And so both from a training [perspective], what’s known as deep learning, and from an inferencing perspective, we’re moving up the value chain of having hardware accelerators in our platform, having a rich software ecosystem that we provide that takes advantage of all those hardware accelerators. And my team pushes them up into all the compilers and frameworks like GCC and LLVM and PyTorch and TensorFlow and ONNX, which Microsoft’s using a lot.

And so we’ve taken all this accelerator software that tickles our hardware in the optimal way, and we push it all out into the open-source communities. We work with key [independent software vendors] to make sure that not only do they have [Nvidia’s] CUDA [parallel computing platform], they have Data Parallel C++, which is being standardized through ISO C++. So we have an alternative—playing catch-up, we get that—for customers when our GPUs are available next year, so that they have all that stuff prepositioned in the ecosystem. And through our oneAPI, which is the technology I own essentially, you can run that on an FPGA, you can run it on a GPU, you can run it on a Xeon [CPU]. In our client platforms, you can run it on the integrated GPU that’s on a laptop, for example.

So we’re really trying to say, ‘OK, we [have] to enable AI and [machine learning] acceleration for [training] and for inferencing.’ For servers, it’s going to be more for [training], and clients, more for inferencing. But the ability to kind of run these models—natural language processing models, recommender system models, like what Facebook is doing—we’ve got to be able to enable all of our platforms, no matter where they fit into the channel, so that we have those capabilities. And then we’ve got some open-source software that’s really popular called OpenVINO, which is on the edge and IoT, AIoT, industrial edge side. Really popular. That’s a fast-growing business for us.

So when we enable those platforms, it creates again other opportunities for all kinds of business models on top of that. So like camera surveillance: We don’t want to be the camera surveillance company. We can enable machine learning at the edge and interpret video information. [Channel partners] can go back to some surveillance company and enable some value. What we’re trying to do is build that software-hardware stack. You get the hardware acceleration because we can optimize everything, and you get the capability and the ability to extend it, because we’re trying to enable the open-source ecosystem as opposed to being a proprietary solution, so anybody can build on it.

Pat Gelsinger talked about Nvidia being ‘uncontested’ in the GPU and AI space and how he really sees Intel contesting Nvidia strongly in the future. Nvidia has really built out a pretty comprehensive software stack over the last several years. It starts with CUDA as the foundation, but then you have all these various frameworks. They have different things for applications like conversational AI and medical imaging. Then, of course, they now have paid software services that can let people take advantage of GPUs within a more traditional data infrastructure. Are there expectations in place for when Intel needs to be at a certain level of competitiveness against what Nvidia is offering in terms of going down the list of capabilities and features?

To be honest, they had 10 years to essentially create the market, setting aside Bitcoin miners and gamers. They’ve been uncontested, that’s true. Pat has said that. But again, I don’t think we feel like we have to go head-to-head with them on every one of those verticals. If we give everybody all these capabilities, let’s say we have a competitive GPU equal to [Nvidia’s] A100 and whatever [future products are on the] road map—because we’re not just doing one, we’re going to do a better road map as well. Now we’re playing catch-up. We acknowledge that, but we have really smart people here. Some of the smartest I’ve worked with in my whole career, and I’ve worked at Sun Microsystems, Cisco [and] research labs, so a lot of smart people. So it’s not a shortage of brainpower. We have a shortage of time because we’re playing catch-up.

But again, we have the ability to run those workloads not just on a GPU, but on a CPU, on a SmartNIC. We have our XPU model. We can do it on FPGAs. We can do it on SmartNICs. We can do it on Atom [CPU] cores. We can do it on our laptops. We can do it on our servers. We can do it in the cloud. We can do it at the edge. We can do it in telco. So our goal is to enable that ecosystem. We have a broad reach of capabilities, but we stay consistent about the software, so you’re not having to learn all the new software things. And we do it through the open ecosystem, so you don’t have lock-in.

Nvidia’s done a good job of creating the market; therefore they have lock-in, and therefore they have price leverage with their customers. And if we can come out with competitive products, which we expect to do, and we enable the same software ecosystem that the rest of the world is using, and we can run the biggest models competitively, and we can do all the inferencing—because inferencing becomes ubiquitous on every platform, so there will be inferencing acceleration any place compute is happening, including your phone—and we can bring a higher-value software ecosystem and let other people make money, not just Nvidia, then I think the market will accept that.

How central is oneAPI to all the various new software efforts that Intel is doing?

OneAPI is both a brand and a set of technologies. And I’ve been doing deep dives across the whole [company]. It’s actually a tremendous amount of stuff, and we put this oneAPI label on it, but it’s still very much what I call market-enabling technology. It gives you all the libraries, the tools, the sophisticated debuggers because we can run these parallel programs on GPUs or parallel programs on CPUs with lots of cores and lots of memory and lots of hardware acceleration. The ability to debug that stuff is nontrivial, and we have one of the best debuggers, I think, in the industry, because we can debug the hardware, we can debug the FPGA, we can debug the GPU, we can debug the x86 processors. I don’t think you can find another debugger that does as much as that does. So that’s one of two dozen capabilities within the oneAPI ecosystem.

But I think of it as an ecosystem of technologies that has this label called oneAPI. And right now, we don’t charge for it. We charge for support if you want support, but we provide this to basically enable the partners and [operating system] vendors and the supply chain people to get the maximum advantage of all the hardware we deliver. But I think we’ve got to do a lot more because […] there are roughly 24 million developers in the world, based on the last charts I’ve seen. And let’s say that 6 million of them are systems developers: people at Samsung, Bosch, automotive companies. They’re all using our stuff. You just don’t read about it on Stack Overflow and in the general press. They’re all using our stuff because they’re doing all this systems-level programming. But what about those other 18 million developers?

So we’ll be coming out with what I call a ‘developer reboot,’ and some of that will come at Intel On, [the company’s upcoming innovation event in late October], about how we reach and acquire the developers higher up in the stack, primarily focusing on the AI [and machine learning] developers or the model developers. We got all the great software and hardware, but how do I enable them to want to choose it when they have a choice, and that’s really where we want to reach out in a new way that historically we haven’t. It takes a new kind of personality and a new kind of way of communicating, evangelizing and enabling and reaching out to that higher-value developer [who’s] writing in Python [and] doing machine learning in Python. They don’t care if it’s an AMD processor or an Intel processor, an Arm processor. They want the fastest, cheapest one, so we have to differentiate our value higher up in the stack than we’ve traditionally done.

So you’re saying that developers have not cared about what brand is underneath, but it sounds like you do want to make them care?

Let’s put it this way: If I’m greener because I have better power efficiency, I’m faster, and every piece of software that you want to run just works—you don’t have to tweak it, do anything, re-code it—why not? In other words, I’m faster, cheaper and greener because I can control the power. I have APIs to let you do power management, so if you want a green cloud, you could do that. We have efficiency cores [with the upcoming Alder Lake CPUs], which means not everything has to run on the performance cores; you can run some things on the efficiency cores. And I have an accelerator sitting right there for your machine learning workloads. You don’t have to go on some other box. You just take your containers. We have a technology we acquired. You can [put] that container on any cloud, and you just pull all your machine learning workloads and [place them] across 150 Xeon servers. You can do a lot before you hit the GPU.

So that way you’re greener, [require] less power, still get value, but I’ve got to go educate a lot of these developers that they can do that, because they don’t know about it because we haven’t told them. We have to change our engagement model with the developers to educate them on what we already have, and then there [are] new things we can deliver and bring to them as well. So I think of every developer as a partner; we need to serve them better.

As Intel’s CTO and the head of the Software and Advanced Technology Group, what do you have to do to make Intel the unquestioned leader in every category?

Well, first of all, like I was just saying, we’ve got to show up in the right place. I have to meet developers where they are. I have to meet them wherever they are in the cloud, at the edge, in the enterprise, at the ISV, in the open-source communities. So No. 1, we’ve got to do much more outreach, be more visible in all the places that developers hang out, both technically and socially. So part of me as the CTO is almost ‘chief talking officer.’ I just have to go out and talk about all the things we’ve already done and what we’re doing more, just so that people know, because we’ve been very introverted about it—I think that’s the way to say it. Pat’s called it ‘the geek is back,’ but the geek also has to be visible.

That’s No. 1. And I think No. 2 is, within the company, depending on where you count, [we have] like 13,000 to 15,000 software developers. I have to also evangelize them to do more, to drive more software value off of our hardware assets. I have people doing BIOS and firmware. I have people doing compilers. I need other people moving up the stack, so that’s going to mean acquiring additional people to come into Intel to do that, and I’ve got to attract them with the story. The chief talking officer part of my CTO job is really evangelism, internally and externally.

Secondly, as the technical officer, I’ve got to make sure that we’re consistent across our platforms, consistent across our software assets and we work together—because [there are] 110,000 people in this company—we work together to meet the market where it wants to be. And so I’m working really closely with all the business leaders, all the hardware teams, the sales and marketing organization to make sure that we’re showing up with the right technology for the right problems with the right quality, the right security, the right scale in the market, because, guess what, security wins all the time. Security and performance still matter. And we think we can deliver that better than anyone else.