VMware-Broadcom, GenAI, and Future Of Edge: Five Questions With Scale Computing CEO Jeff Ready

“We’re kind of punching above our weight class, and that speaks to the partners,” says Scale Computing CEO and co-founder Jeff Ready.

Intel’s announcement last month that it would no longer produce the Next Unit of Computing (NUC), the backbone of edge server environments and one of the best-selling computers in the world, caught Scale Computing CEO Jeff Ready off guard.

“I’m like, what? We sell thousands of these things,” he told CRN. “They’re getting out of the manufacturing part of that business. They already have other partners. You could be an Intel-authorized manufacturer and make them, which is how most of Intel’s business works. The NUC was its own sort of thing. Early on, no one was making these tiny PCs.”


Ready, who co-founded and runs Scale Computing, an Indianapolis-based VMware competitor that specializes in edge environments, said the NUC was an odd line of business for Intel. But Ready said the news has been a great opportunity to remind customers that with Scale as a foundation they can always change hardware to match business needs.

“Even if the NUCs were going away, the fact is three years from now that generation of NUC, or Dell server, or Lenovo server is totally different, and may not be backward compatible,” he said. “With Scale, it doesn’t matter. You can run a 10-year-old server next to a new one. If you have GPU needs all of a sudden, you just drop it in there.”

Another market factor driving business is VMware’s merger with Broadcom, which has been churning through regulators for more than a year and mired in a second-request FTC investigation since July 11, 2022.

The $61 billion deal has won approval from regulators in the European Union, the UK, Canada, South Africa, and Brazil, but is still awaiting approval from the U.S. Federal Trade Commission.

Ready calls that “all good news. It’s one of the top three things we have going on in terms of lead generation.”

He talked with CRN about the merger, Intel’s announcement, and how Scale Computing is preparing its customers and partners for generative AI at the edge.

So we all see the news: Intel is no longer in the NUC business. What happens when you see that?

What I woke up to Monday is Intel abandoning the NUC line of business. And I’m like, ‘What? We sell thousands of these things.’ Fast forward a few days, and that’s not what is happening.

They’re getting out of the manufacturing part of that business. They already have other partners. You could be an Intel-authorized manufacturer and make them, which is how most of Intel’s business works. The NUC was its own sort of thing. Early on, no one was making these tiny PCs.

When the NUC first came out, it was new, so Intel did it because nobody else was doing it, and they were great. Now other people do it. Intel is like, we don’t need to make it anymore. It was a weird line of business for them.

Basically, Intel was the sixth-largest PC seller in the world. The other five are all partners of Intel.

ASUS is going to take over that line of business, and they were already one of Intel’s authorized manufacturers, so it doesn’t really change anything, other than Intel is back to ‘Intel Inside’ instead of Intel on the front.

But in the end it’s more sources of supply. We can strike other partnerships and get more out there, so it’s all great for us. I don’t care who makes the NUC as long as our software goes on it.

It’s nice in a way when these things happen. This is what happens in hardware.

Even if the NUCs were going away, the fact is three years from now that generation of NUC, or Dell server, or Lenovo server is totally different, and may not be backward compatible. With Scale, it doesn’t matter. You can run a 10-year-old server next to a new one. Got GPU needs? You just drop it in there.

For the partners, they love to tell the story to their customers, ‘Work with us, and we’re future-proofing your environment.’

That’s what everyone wants to hear. They’re not going to be stuck if they partner with Scale and this stuff happens, and it will. Hardware changes.

We were joking about NVMe drives today. Fifteen years ago they didn’t exist. So we have Scale customers who have come with us all that way. You just keep dropping the hardware in. That’s part of the magic of Scale.

It’s been a year now that we’ve been talking about VMware and Broadcom. What’s the conversation like in your market around that transaction?

Everyone is looking for alternatives. It’s all in line with what we thought would happen. It’s part of what we saw Broadcom do with other companies. That’s the plan. I envision those pricing changes, however they manifest, are going to affect everybody.

I imagine that it’s even harder for mid-market and lower enterprise-type customers to get any negotiating leverage. They’re typically dependent upon their reseller, CDW, SHI, whoever it might be, and that’s where I’m hearing to expect significant price increases.

It’s one of the biggest pipeline drivers that we have. It’s one of the top three things we have going on in terms of lead generation. All good news.

When is Scale going to unveil its line of edge-based generative AI servers?

It all depends. My gaming PC has two Nvidia 4090s in it. It is quite capable of running generative AI for a limited set of stuff. It’s got all the horsepower you could possibly want. The generative AI stuff is in its infancy. Almost everything around generative AI now is tapping into ChatGPT.

The state of the technology is such that it’s advancing so quickly. The Nvidia 4090 I have now is way faster than the 3090 I had 18 months ago. If I’m a cloud provider and I’m going to make a $100 million investment in a GPU farm so that I can give economies of scale to the people who want to use GPUs, if I just wait 12 months, I can make a $20 million investment and have the same capabilities.

That’s because it’s moving so fast. It’s like CPUs in the 1980s and 1990s.

The training of it uses a tremendous amount of CPU/GPU cycles. That can go in the cloud. Using it doesn’t take nearly the same workload. You can see how we can use cloud resources to train our model, and then once the model is trained, it’s sitting there and you can use it on a simple desktop system, no problem.
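To make that split concrete, here is a minimal sketch of the train-in-the-cloud, run-at-the-edge pattern in PyTorch. The model class, checkpoint filename, and input shape are hypothetical placeholders, not anything from Scale Computing’s stack; the point is only that loading already-trained weights and running inference is cheap enough for modest on-site hardware.

```python
# Sketch only: load weights trained elsewhere, then serve predictions locally.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for a model whose weights were trained on rented cloud GPUs."""
    def __init__(self, in_features: int = 32, classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Inference at the edge: use whatever hardware is on site (CPU or a small GPU).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyClassifier().to(device)
# "model_trained_in_cloud.pt" is a placeholder checkpoint produced elsewhere.
model.load_state_dict(torch.load("model_trained_in_cloud.pt", map_location=device))
model.eval()

with torch.no_grad():  # no gradients needed, so this is far cheaper than training
    sample = torch.randn(1, 32, device=device)
    prediction = model(sample).argmax(dim=1)
    print(f"predicted class: {prediction.item()}")
```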

How are you preparing Scale for generative AI and this opportunity here in the next 12 to 18 months?

We’ve expanded the software stack to add a lot of GPU capabilities. The two basic ones are: you either share it, in the classic ‘I have one GPU and I’m sharing it across a lot of apps’ sense.

Then there’s what I think is more common, which is passthrough: I have one app, or a small set of apps, that need GPU cycles. I might have 50 apps running in my factory, and most of them have no need of that GPU at all. They don’t need to suck up any resources there, so you just pass it straight through. That allows you to concentrate the power and begin to do things like generative AI.

So number one is providing, in the software, the mechanism to access a single GPU or a pooled resource of GPUs.
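For illustration, here is a generic KVM/libvirt sketch of the passthrough mode just described: dedicating one physical GPU to the single VM that needs it. This is not Scale Computing’s management interface; the VM name (“factory-ai-vm”) and the PCI address are placeholders you would replace with values from your own host.

```python
# Generic KVM/libvirt illustration of GPU passthrough (not Scale's API):
# hand one physical GPU to the one VM that actually needs it.
import libvirt

# PCI address of the GPU on the host (placeholder; find yours with `lspci`).
GPU_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
dom = conn.lookupByName("factory-ai-vm")     # the one VM that gets the GPU

# Attach the GPU to the VM's persistent definition; add VIR_DOMAIN_AFFECT_LIVE
# to hot-plug it into a running guest as well.
dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

conn.close()
```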

Then, number two, is always cost optimization. What kinds of systems on our compatibility list can take GPUs, and how do we have a variety of options for customers? One customer may be able to get away with fairly low-end GPUs.

The cost difference between the fastest, top-of-the-line GPU and one that’s a step or two below it might be 90 percent. If I’m a research lab, then I have to pay for the high end. On the other hand, if I need to deploy this to 500 locations on a factory floor and you can get two steps down in price, that’s a big deal.

So we’re always optimizing for the customer use case.

And even small devices like the Intel NUCs, or the Lenovo M90q (the Lenovo Tiny is what it’s called; it’s basically their version of an Intel NUC): some of them, not all of them, can take GPUs. Some are tiny GPUs; some bigger systems can have a full-size Nvidia GPU in there. And we’re seeing customers deploy that.

One of the great things about Scale is you are indifferent to the hardware. So if the NUC went away, which it’s not — but if it did go away — you could put a Dell server or a Lenovo server in the same cluster and it makes no difference to you. We pool it all together.

In this conversation, it’s also true for GPUs. I could have a Scale cluster sitting there. I deployed it on the factory floor or the back of the restaurant. I wasn’t thinking AI at the time, now you need an AI app. What do you do? You just put a device in there that has the GPU and it joins the rest of the Scale stuff. I’m not re-architecting anything. I’m not starting fresh, I’m just adding the resource in there.

Scale had a repeat victory in Channel Madness, the first-ever in the history of the competition. And as fun as that is, it does say a lot about your channel and the engagement among your partners.

I’m super proud of that. You see we can win that, which is a popular vote, and then we win the CRN ARC (Annual Report Card) Awards, which again is a survey of all these channel partners, letting them vote. We have thousands of partners, but I don’t have anywhere near the partners that Dell or VMware has. So we’re kind of punching above our weight class, and that speaks to the partners.

I was talking with a partner recently, and he was talking about how much he loved working with Scale. He said, ‘When you come in, you are the foundational product. The customers love it.’ And his lesson was don’t lose that.