Intel's Lisa Spelman: Why Optane DC Is Winning Over Customers

'The part that's been interesting has been—and I feel good because I said this before launch—was once we get the product in the hands of customers, they will do things with it that we didn't think to do,' Intel Xeon and Memory Chief Lisa Spelman tells CRN in an interview.

High Conversion Rate For Optane Proofs Of Concept

Lisa Spelman says Intel's Optane DC persistent memory is winning over data center customers running proofs of concept at a high rate as interest expands beyond the virtualization and in-memory database applications that got the new memory type up and running.

"One of the things that we're just getting into now is the conversion rate, so we're getting high 80 [percent to] 90-plus percent conversion of proof of concept into deployment, which is insane," she said in an interview with CRN at CES 2020 earlier in January.

[Related: Intel: Xeon Supply Is Not Constrained, Delays Can Happen]

In her new role as vice president and general manager of Intel's Xeon and Memory Group, Spelman is charged with leading product strategy and marketing for server workloads that can take advantage of Intel's Xeon processors and Optane memory combined. The group was created in November when Intel quietly reorganized its Data Center Group into the Data Platforms Group.

Last year saw the official launch of Intel Optane DC persistent memory, a new tier of memory that combines the persistent qualities of storage with performance that nearly rivals DRAM. Because Optane DC is only compatible with Intel's second-generation Xeon Scalable processors, the company is hoping it will give customers extra incentive to buy those chips as Intel faces greater competition from rival AMD.

While it's still early innings for Optane DC, Intel has already racked up a couple of major wins for the product line: a multiyear partnership with SAP to optimize the software company's applications on Xeon and Optane as well as a deal to put Optane DC into Oracle's new Exadata X8M server platform.

In an example of how Optane DC can benefit certain workloads, Intel has previously said that the memory type can reduce the data recovery time of a 6-TB SAP HANA instance from 50 minutes to four minutes since the product's persistent qualities make it more resilient to server outages.
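The recovery-time benefit comes from byte-addressable persistence: after a restart, an application's working data is already in place, so the long bulk reload from storage is skipped. The idea can be sketched with an ordinary memory-mapped file as a stand-in for persistent memory (this is an illustrative approximation, not Optane-specific code; the file path is hypothetical):

```python
# Minimal sketch: data written directly into a durable, byte-addressable
# region survives a "restart" without any bulk reload step. A plain
# memory-mapped file stands in for persistent memory here.
import mmap
import os
import struct
import tempfile

PATH = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")  # hypothetical path
SIZE = 4096

# "First boot": create the backing region and write a record in place.
with open(PATH, "wb") as f:
    f.truncate(SIZE)
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mm:
        mm[0:8] = struct.pack("<Q", 42)  # store a counter directly in memory
        mm.flush()                       # roughly analogous to a persistence barrier

# "After a restart": the data is already there; no reload from storage.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mm:
        (counter,) = struct.unpack("<Q", mm[0:8])

print(counter)  # 42
os.remove(PATH)
```

Real persistent-memory applications (SAP HANA among them) use dedicated libraries and explicit flush instructions for crash consistency, but the principle is the same: state persists in place across restarts.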

In her interview with CRN, Spelman highlighted Intel's more than 200 ongoing proofs of concept with large enterprises using Optane DC and said this year is about expanding the market for the persistent memory product line, which happens to be suited for high-performance computing with large data sets.

"The part that's been interesting has been—and I feel good because I said this before launch—was once we get the product in the hands of customers, they will do things with it that we didn't think to do or honestly we ran out of technical resources to do," she said.

The company also plans to launch a next-generation version of Optane DC persistent memory this year while it works closely with SAP on the design of a future generation.

"They're working together on what do we need to do with the hardware changes we're making on the software side for the next-next generation," Spelman said.

What follows is an edited version of CRN's interview with Spelman, who talked about why Optane DC is winning over customers, how Intel is working with SAP, which applications Intel plans to target next with Optane DC and why channel partners should stick with Intel as AMD finds momentum.

How would you describe Intel's year in the data center last year? Were there any certain themes? And what are the themes for Intel's data center business in 2020?

In 2019, for us, it was really all about the ramp [of Intel's second-generation Xeon Scalable processors]. Getting Cascade Lake in as many hands as possible, and the platform around it. So I talked before about all the things we did on the CPU side, and the biggest one in this generation was the [Deep Learning] Boost, and we ran tons of [artificial intelligence] projects with customers: get the testing done, get all of the frameworks optimized and start turning those into production deployments.

But the other super cool thing that we worked really hard on last year was the introduction of our new Optane memory technology into the platform. Bringing new stuff to market is always fun. When you're bringing new stuff that redefines how application architecture is done, that's even more fun. But it's real work. So we put together a team as part of our memory team. That was really their sole focus. They're the solutions architects, and they're just completely customer- and application-focused, and they were out trying to find all the proofs of concept that fit within the right value propositions. We were pretty specific about the ‘Xeon plus Optane memory together delivers these results in this environment, and so let's work with you, find if that environment fits, start the proofs of concept.’ We've done over 200 proofs of concept, and we have this huge pipeline of more.

One of the things that we're just getting into now is the conversion rate, so we're getting high 80 [percent to] 90-plus percent conversion of proof of concept into deployment, which is insane. But some of it goes back to that targeted focus, around saying, ‘For these types of in-memory database applications and for these types of virtualization use cases, here's where we know it's going to deliver either total cost of ownership you want to see, or it's going to deliver the persistence that your application needs, or it's going to deliver a higher performance.’ Whatever the case might be or may be a multitude of those, so that part's been great. But that's what we set ourselves up to do.

The part that's been interesting has been—and I feel good because I said this before launch—was once we get the product in the hands of customers, they will do things with it that we didn't think to do or honestly we ran out of technical resources to do, because we're trying to get deployments going and the proofs of concept. So at International Supercomputing this year, which was in June and right after the April launch, we had some people starting to pique their interest and test it. And then by Supercomputing [in November], we had people saying, ‘You know what, this has a real play for large [high-performance computing] data sets, and there's some interesting things we can do there.’

We had high-performance computing workloads and some of the AI workloads on our radar, but we hadn't really dug in. And so now we've got these startups. You've got these data management companies. You got these HPC companies that are coming in and saying, ‘Hey, by the way, I grabbed a handful of these, and here's what I have done.’ And so we're starting to see the industry pick it up for different use cases and start helping us scale it there as well.

It's a bridge thing because it was so exciting to have been part of a product family for such a long time, get it to market, drive the proofs of concept—and then in 2020, start ramping it not only in the areas where we knew we had a definitively good use case but also in the areas that our customers and our ISVs are telling us are good use cases. We have a lot of work to do in 2020: we're going to continue to ramp the current generation, and we're also going to bring to market the next generation. And so we'll find customers using both.

Can you talk about Intel's work with SAP on optimizing in-memory database applications on Optane?

So SAP and SAP HANA, it's such an interesting application. It's how so many companies literally run their business. But they're a very technologically advanced company. They're always trying to find the next thing, figure out what works. And so, interestingly, we're working with them on artificial intelligence and this memory stuff. We've been working with them on the Xeon side about AI being built into their application and how we accelerate that. And then we've been working with them on the memory side about how we optimize more and more of the application.

We've been working with them for 10 years ahead of the launch to bring persistence to the HANA product line and figure out what we're doing. Once it launched, we actually signed a longer-term joint collaboration agreement—it's one of those things where you see the potential, but it's not until you actually get it in the market and start testing with customers that you say, ‘OK, this is the real thing.’ That's been great to take that technical collaboration a level deeper.

So now they're providing input at an architecture level for future generations, and then we're doing work with them, like we literally have our own software engineers sitting with them, making sure that they're taking maximum advantage of the next generation that's coming out because that one was pretty well defined by the time we signed this. And then they're working together on what do we need to do with the hardware changes we're making on the software side for the next-next generation.

So they're giving input for what they need?

Yeah. Deeper design input is what I would say: to more optimally deliver for the application. Then the other thing we did was with Oracle. We won their Exadata business, which is fantastic because then we have this great coverage of [what I call the] enterprise market—it's more like an enterprise use case because people are doing a ton of those workloads at cloud service providers or in a cloud architecture in their on-premise data center. It's cutting across all those.

Besides HPC applications with large data sets, are there any other applications or use cases Intel is looking at this year for Xeon and Optane?

I want to drive further with the team into some of the standard baseline virtualization applications that can benefit from larger memory sizes. They may not all need persistence, and the continued utilization and investment in persistence will happen over time. It's an ecosystem change that will take a while to happen—and that's fine. We'll just keep pushing it and driving it. But I really think there's a lot more to be done, and that there's a lot more capability in people's hardware they're landing in their data center than they've turned on, so we're always working with customers—and this might sound counterintuitive—to increase their server utilization.

Now you can say, ‘Well, if they only use a server 30 percent of the time and then they buy another one, that's good for you,’ which it is. But by and large, we want them to use their capacity more fully and then turn on new use cases, and eventually it all works out that they buy more. If we had the mindset that every time you offer customers more efficiency, they buy less, we would have never invented virtualization. Because if you say, ‘Oh my gosh, you can load four instances onto one server—end of the world,’ [it turned] out to be one of the greatest accelerants of our business over time. So we don't look at those things with fear. We look at them as opportunity. The more we enable, the more we grow.

As far as strategic partnerships like the one Intel has with SAP, are these strategic partnerships becoming more of a focus? Is there a sea change in how Intel is thinking about those?

We've always worked with the ecosystem, but to some extent, yes. I do think we're getting a little bit more strategic about it. And the reality is, we are a hardware company, and that's our foundation, and that's what a lot of people view us as providing. But over the last several years, we've invested incredibly in our software capability, and I honestly think it's been a learning curve that we've gone through inside of the company to view hardware and software as a combination, as so much more powerful than hardware alone. So we used to do the work to enable a new technology in the industry. Sometimes it would be very basic, like if we add this into the chip, it will not break XYZ application. That's a pretty low bar.

So now we're sitting there and stepping back and saying with these key partners and these leaders in the industry, if we add this into the hardware, it will totally accelerate the performance. An area where I think we saw this play out for us in a meaningful business way is the AI space. I've talked to you before about how generation to generation, our hardware improvements in Xeon drove a 2X increase in performance. 2X any time in a generation is phenomenal. Like you're happy as a product manager. You're pretty happy if you got 2X performance. That's not 2 percent. That's 200 percent, right? But then we got to 14X because of the software work we did. And that's like a real ‘aha’ moment for hardware people to say, ‘You know what? I got to make sure I'm investing for this.’

So now when we do product planning, we have software planning combined with the hardware product planning. One of our executives says, ‘leave no transistor behind,’ meaning no transistor on that piece of silicon will go underutilized. So when you said, ‘Is there kind of a mentality change?’ I do think there has been.

So it's not completely new?

It's more serious is what I would call it. So we would always have software plans, and we would always have hardware plans, and I won't say never the two shall meet. But I would say we pulled our software planning earlier into the product life cycle, so it's more involved at the beginning. And there are times, honestly, where we look at things and say we could do this in hardware, but it's better done in software. And again, I know that sounds super illogical when you start with the base of us being founded as a hardware company—to admit that a capability might be better delivered in software? That's a cultural change that we've gone through. And I would say it is a difference.

I know Intel has a very large software organization. Is that change in perspective having an impact on hiring?

Where it's had a real impact is, we've always had an excellent team that writes more lines of code than just about anyone in the industry, about getting the base platform out there, doing those enablements and optimizations. But it's been fun to turn on the ideas. So if you look at us culturally, you might say that in the past, years ago, being a hardware engineer was king. It was the lead position, and now software engineering plays an equally important role.

I think that's allowed us to unlock or unleash some real hidden talent that we had inside of the company, and [it] also attracts more and different talent. So from the software industry, we have people saying, ‘Yeah, I want to go there because I can get access, I can influence the new hardware and solve my problems.’

Project Athena has been very important for Intel on the client side. I know laptops are very different from servers, but have any of the lessons from Project Athena gone to the data center side?

Our assets of products and capabilities have allowed us to focus at much more of a solution level. So yes, Xeon is the base of our business, and it's the foundation. But if you would have looked at us several years ago, people would have thought it was unheard of for us to be investing in compute ASICs and FPGAs and silicon photonics. They would have said, ‘Why are you doing it?’ or ‘That might compete with the CPUs, so they'll never do it.’

And so we take a much more solutions level view now, and a system of systems [approach]. I don't know if you've paid any attention to what we've done over the past couple years with [Intel] Select Solutions, going through and driving workload configurations. It's not as flashy as Project Athena. I'm never going to get to stand on stage and hold a foldable PC—I would have to change jobs for that to happen—but I do get to occasionally do a demo of a Select Solution, which is born of that same mindset: How do we align the industry?

Another small example is CXL, the Compute Express Link. That's something where we're driving that across the industry, [saying,] ‘Hey, here's this definition of how we better drive data, move data inside the system.’ And we've got competitors as members of that consortium, we've got partners and customers—and we're saying, ‘No, this is important for the ecosystem.’ Of course, we want to be best at it—all of those things—and that's our goal. But we are still able to recognize, even in a competitive world, when the industry as a whole and our collective customer base will benefit from some of that industrywide alignment at the solution level.

For Intel's channel partners, systems integrators, value-added resellers, anyone who's either selling servers or building servers or both, why should they care about CXL?

I think they should view it as just another opportunity to increase performance for their customers and remove complexity. There's so much opportunity in the industry still as workloads grow and proliferate, as everyone needs to go from edge to cloud and back again. Nobody's just doing their one stand-alone data center thing; it's all connected. There's a role for those systems integrators, for the channel partners, all of them, to help people that just don't have the resources themselves to do it.

So I traveled through Europe for the end of the year, and I spent time with some of our major customers over there. Some of the automakers, all of that. And I was again reminded and astounded by at times how small the IT departments can be, even [at] major corporations. You're talking to people, and they're like, ‘No, I'm the person for this. Like, I'm the department of this.’ Whether it is performance or security or modeling or whatever. So there is a ton of opportunity to help them stay on top of all of it and really differentiate themselves, for the channel partners, on their value-add.

AMD's ramping up their efforts with EPYC Rome. To the channel partners, what is your message to them as to why Intel is still their most important data-centric partner?

I just look at what we're doing across that data movement pipeline—moving, storing and processing, edge to cloud and back. I just don't know that there's another partner besides Intel that has the capability to address more of their business and more of their customers' needs. And you've seen us. We've been committed for over 20 years in the data center space and network space: driving the solutions, driving the software, driving the hardware to make it all happen, getting the ecosystem ready, driving the industry, and then delivering that workload performance. And sometimes we get wrapped up in these benchmark wars and things like that. But at the end of the day, what we're trying to do is make sure that what the customers need gets resolved in the simplest, easiest, cleanest fashion possible. And I don't think you'll find anyone with more capability and history and forward potential in that space than Intel.