Mixing And Matching Hardware To Optimize Machine Learning Workflows
Inside Intel, a little-known, independently operated company has ambitions of becoming the “Switzerland for AI computing” by making it effortless for enterprises to run and move workloads across a variety of infrastructure, whether it’s in the cloud, on-premises or at the edge.
That means making it as simple for an enterprise to spin up an AI compute instance on a Dell EMC server as it is on Amazon Web Services. But it also means creating seamless workflows between the different infrastructure types so that organizations can optimize for performance and cost without needing to spend months “re-instrumenting” software stacks for a new environment.
The Intel-owned company making these capabilities possible is called cnvrg.io, and its CEO and co-founder, Yochay Ettun, said the new offering, cnvrg.io Metacloud, is part of the company’s ambitions to build an AI infrastructure marketplace that serves as a “cloudless cloud.”
“Eventually what we see is Metacloud providing OEMs and other infrastructure providers an easier way to consume their resources,” he told CRN in an interview.
Metacloud is a managed version of cnvrg.io’s “operating system” for machine learning, and, in support of Intel CEO Pat Gelsinger’s vow to be “ecosystem-friendly,” it can juggle workloads on systems that are not just based on Intel chips but also on those from its rivals, Nvidia and AMD.
“With AI, we have many customers doing their pre-processing on Xeon CPUs, and then they’re doing their training — some of it is done on CPU, some of it is on a GPU — and then they do inference on CPU. So even in a single pipeline, you need this type of heterogeneous compute support,” Ettun said.
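The heterogeneous pipeline Ettun describes can be pictured as a sequence of stages, each declaring the device class it should run on. The sketch below is purely illustrative — the `Stage` class, stage names, and device labels are assumptions for this example, not part of any cnvrg.io or Metacloud API:

```python
# Hypothetical sketch of a heterogeneous ML pipeline: pre-processing on CPU,
# training on GPU, inference back on CPU. In a real system, a scheduler such
# as the one Ettun describes would dispatch each stage to its declared device;
# here the stages just run locally so the flow is easy to follow.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    device: str                       # e.g. "xeon-cpu", "gpu"
    run: Callable[[list], list]       # the work this stage performs

def run_pipeline(stages: List[Stage], data: list) -> list:
    for stage in stages:
        print(f"{stage.name} -> {stage.device}")  # log the placement decision
        data = stage.run(data)
    return data

pipeline = [
    Stage("pre-process", "xeon-cpu", lambda d: [x * 2 for x in d]),
    Stage("train",       "gpu",      lambda d: d),
    Stage("infer",       "xeon-cpu", lambda d: [x + 1 for x in d]),
]

result = run_pipeline(pipeline, [1, 2, 3])
```

The point of the sketch is the shape of the problem, not the toy math: because each stage targets a different device class, moving the pipeline between clouds or on-premises hardware normally means re-instrumenting each stage — the friction Metacloud aims to remove.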
In his interview with CRN, Ettun talked about how Metacloud helps AI developers avoid vendor lock-in, how the new service will help OEMs, how it fits in with Gelsinger’s new “software-first” strategy and why Intel acquired the company last year.