Intel Takes On Nvidia With Vendor-Neutral Managed AI Service

The semiconductor giant aims to let data scientists quickly develop, train, deploy and manage machine learning models across any infrastructure with its Metacloud managed service, which is being offered through cnvrg.io, an Israeli AI startup acquired by Intel last year that operates independently within the chipmaker.


Intel is upping its arsenal of AI software against Nvidia with a vendor-neutral managed service that allows developers to run machine learning workloads on any hardware, in the cloud or on-premises.

Announced during last week’s Intel Innovation event, the new Software-as-a-Service offering comes courtesy of cnvrg.io, an Israeli AI software startup acquired by Intel last year that operates as an independent company within the chipmaker.


The new managed service is called Metacloud, and it’s a managed version of cnvrg.io’s Kubernetes-based “operating system” for machine learning that allows data scientists to quickly develop, train, deploy and manage models across any infrastructure. The service is in early access now.

The goal, according to cnvrg.io CEO and co-founder Yochay Ettun, is to fight vendor lock-in and let developers freely run machine learning workloads wherever and whenever they want. Developers often seek to do this to optimize for cost and performance, but Ettun said they’re usually held back by lengthy transition processes for moving workloads to new environments.

“We allow customers basically in a single pipeline to run workloads on different platforms, on different architectures and also on different clusters,” he told CRN in an interview. “Like the training could run on Dell, the pre-processing could run on [Amazon Web Services], the inference could be at the edge. We basically make it seamless to run these types of workloads that without Metacloud would probably take a few months or even more to set up and configure.”
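The idea Ettun describes — one pipeline whose stages each run on a different cluster or cloud — can be sketched in a few lines. This is a hypothetical illustration of the scheduling concept, not cnvrg.io’s actual API; the step names and target labels are made up for the example.

```python
# Hypothetical sketch (not cnvrg.io's real API): each pipeline step
# declares the compute target it should run on, and a scheduler groups
# the steps into a per-cluster dispatch plan.
from dataclasses import dataclass


@dataclass
class Step:
    name: str
    target: str  # e.g. "aws", "dell-onprem", "edge" (illustrative labels)


# The example pipeline from Ettun's description: pre-processing on AWS,
# training on Dell hardware, inference at the edge.
pipeline = [
    Step("pre-processing", "aws"),
    Step("training", "dell-onprem"),
    Step("inference", "edge"),
]


def schedule(steps):
    """Group steps by their declared target, mimicking a scheduler that
    dispatches each stage of a single pipeline to a different cluster."""
    plan = {}
    for step in steps:
        plan.setdefault(step.target, []).append(step.name)
    return plan


print(schedule(pipeline))
# → {'aws': ['pre-processing'], 'dell-onprem': ['training'], 'edge': ['inference']}
```

The point of the abstraction is that the pipeline definition stays the same while the `target` labels change, which is what makes moving a workload to a new environment a configuration change rather than a months-long migration.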

In its announcement, the Intel-owned company called out specific platform integrations with Amazon Web Services, Google Cloud, Microsoft Azure, Red Hat, VMware, Dell Technologies and Seagate as well as with Intel software toolkits like OpenVINO. But Ettun stressed that Metacloud is hardware-agnostic and, as a result, can work on systems running on chips made by Intel’s top rivals, Nvidia and AMD.

For that reason, he called cnvrg.io the “Switzerland for AI computing.”

“There are no special requirements. You have a Kubernetes cluster. You can connect it to Metacloud. It’s that simple,” Ettun said. “And we don’t care what type of hardware the user is running on and using. Just have it integrated with Metacloud, and you’re good to go. So the CPU, GPU, accelerators, any type of hardware they want.”

To sell Metacloud, cnvrg.io is relying on a direct sales force as well as OEMs and their channel partners, according to Ettun. He added that cnvrg.io is focused on a go-to-market campaign with Dell Technologies right now that will expand to other OEMs over time. (cnvrg.io has previously announced partnerships with Supermicro and Lenovo that bundle its machine learning operating system with servers.)

“They have a big motivation to sell to their customers, because with [cnvrg.io], they basically get a better-than-AWS experience for their customers,” he said. “So they will sell and promote and then they will have basically more customer reach, a better customer experience, so there is going to be a lot of work with Dell and the other [OEM] partners around that.”

Given its cloud-like nature, Ettun said, Metacloud has the “exact same pricing structure” as cloud service providers, which means that customers will pay based on consumption.

Metacloud will help Intel become more competitive in the AI space in a couple of important ways, according to Ettun. For one, he said, it will help Intel provide the “ecosystem-friendly” approach to AI that CEO Pat Gelsinger has vowed. He said it will also help Intel CTO Greg Lavender with his stated goal of engaging software developers more deeply than ever before.

“Intel wants to meet where the developers are, right? Greg was saying that, and [cnvrg.io] is basically at the top of the stack — I think one of the [highest] in the stack at Intel,” Ettun said. “And we interact daily with developers, with data scientists, with software developers that are not driver developers. We meet developers, and that was basically one of the motivations for Intel to invest and to basically grow the business because Pat and Greg are looking to be more close with the developers.”

The new Metacloud service falls in line with Gelsinger’s plan to offer more paid software services and supports his goal to increase software revenue as part of the chipmaker’s new “software-first” approach that was on full display at last week’s Intel Innovation event.

Juan Orlandini, chief architect at Insight, a Tempe, Ariz.-based Intel partner and No. 14 on CRN’s 2021 Solution Provider 500 list, told CRN that platform management system solutions like Metacloud present “massive” opportunities for channel partners because the data center market is moving to a platform engineering model that puts a premium on building efficient yet flexible IT environments.

“Building platforms that do AI are non-trivial tasks, and that is not something that has actually been matured very well, so actually doing that and building a platform that can be consumed and then operated by a platform team that is then ultimately the enabler for the DevOps, DevSecOps, data scientists, all that stuff, that's a huge thing,” said Orlandini, whose company is currently evaluating Metacloud. “That whole platform engineering thing is going to be massive. It’s going to be the way we are going to be running our data centers of the future.”

By letting AI developers run their workloads on any infrastructure, Metacloud helps fight vendor lock-in, which has made it difficult for developers to move their workloads across different kinds of environments, according to Orlandini.

“The challenge prior to this being available is that if you had made a decision to run this in the public cloud and leveraged only the public cloud’s set of tooling, APIs, software development kits, all the pre-trained models, all that other stuff, you're essentially locked into that place,” he said. “And if, down the road, somebody wanted to move that back on-premises, or even build it as a hybrid model, where you’re running part of it in the public cloud, part of it on premises, you had a hard time doing that because you were locked into that public cloud provider’s tooling.”

Orlandini added that Metacloud’s “Switzerland” approach to AI computing will help make Intel more competitive in the AI space against Nvidia.

“I am certain that their stack will run well on Nvidia, but I'm also certain that it will run exceedingly well on Intel,” he said. “Intel is hedging their bets on having this ‘Switzerland’ approach with something like [cnvrg.io] that will allow them to provide those software services that don’t necessarily tie a developer to either a specific location or a specific acceleration model like Nvidia does.”