Intel Seeks To Win Over AI Developers With Open-Source Reference Kits
The semiconductor giant is trying to gain momentum against rival Nvidia in the AI computing space with open-source reference kits built on the company’s oneAPI platform that it says can train models ‘faster and at a lower cost by overcoming the limitations of proprietary environments.’
Intel seeks to hook more developers and data scientists building AI applications with an expanded set of open-source reference kits that leverage the chipmaker’s growing software stack.
The Santa Clara, Calif.-based company announced on Monday it has released 34 open-source AI reference kits that are designed to help organizations across several industries train AI models “faster and at a lower cost by overcoming the limitations of proprietary environments.”
Wei Li, Intel’s vice president and general manager of AI and analytics, said the release is part of the semiconductor giant’s move to enable an “AI everywhere future through not just our portfolio of AI-accelerated processors and systems but also our contributions to an open AI software ecosystem.”
“Intel AI reference kits give millions of developers and data scientists an easy, performant and cost-effective way to build and scale their AI applications in health and life sciences, financial services, manufacturing, retail and many other domains,” he said in a statement.
Intel’s claims against “proprietary environments” are likely a knock against chipmaker Nvidia, whose proprietary CUDA parallel programming platform has allowed the rival chip designer to flourish in the AI computing space with its GPUs, according to Andy Lin, CTO at Houston-based system integrator Mark III Systems, which is a top North American Nvidia channel partner.
“That’s typically who they try to target when [Intel has] any kind of AI announcement. That’s always implied, because—I’m going to sound a little bit biased here, but it’s because I believe in them—[Nvidia is] obviously the gold standard,” he told CRN.
What Intel’s AI Reference Kits Contain And Do
At the center of Intel’s open-source AI reference kits is oneAPI, a programming model that is the chipmaker’s answer to CUDA. The company has pitched oneAPI as open and standards-based, giving developers the ability to program and optimize software across Intel’s portfolio of CPUs, GPUs and FPGAs as well as GPUs from rivals Nvidia and AMD.
Designed in partnership with consulting giant Accenture, Intel’s kits consist of oneAPI components, software libraries, model code, training data and instructions for the machine learning pipeline.
The chipmaker said these elements can save developers and data scientists time they would usually spend in the conception, solution architecture and feature engineering stages of a traditional workflow for AI models. By doing this, these users can get started with data preparation much sooner, then move on to training, tuning and deployment of the models.
“Collaborating with Intel to build AI reference kits for the open-source community has led to more productive AI workloads for our clients,” said John Giubileo, managing director at Accenture in a statement. “The kits, built on oneAPI, are designed to offer developers a portable and efficient solution for AI projects, which reduces project complexity and the time to deployment across industries.”
Intel claims that these reference kits, which cover industries ranging from consumer products to manufacturing, come with big performance benefits. For instance, the company said its conversational chatbot reference kit can speed up inferencing in batch mode by up to 45 percent using oneAPI optimizations. Another kit for visual quality control inspections in life sciences environments can speed up training by up to 20 percent and inferencing by 55 percent with oneAPI.
Intel Still Faces Major Challenges In Catching Up With Nvidia
Lin said Intel is making the right move in bundling together various software components to accelerate AI development because the battle for developer platforms is not won or lost in hardware but in “how you cultivate open ecosystems and get people to build with your tooling.”
“The more developers you can get who really want to build in your ecosystem, the more it cascades down from a monetization and a product standpoint years down the line,” he said.
However, he said, Intel is still years, maybe even a decade, behind Nvidia in the software capabilities that have allowed the rival chip designer to reap the rewards of the AI computing boom.
“Nvidia’s operating like they’re behind, but they’re actually 10 years ahead of everyone else. So it’s hard to catch up with someone who doesn’t take that position for granted,” Lin said.
Even though Nvidia’s CUDA platform is proprietary, preventing developers from using the company’s growing software stack with competing chips, the GPU giant has made the right parts of its software stack open in other ways while also introducing a wide range of performance and cost options for GPUs, according to the Mark III Systems executive.
Lin said Nvidia’s exclusive focus on its own GPUs has helped the company ensure that that its software works well by limiting the amount of variability on the hardware side. At the same time, the chip designer has long supported open-source machine learning frameworks like PyTorch and TensorFlow on top of developing open software components higher up in the stack such as Nvidia Modulus.
“You want to be open on certain points in the stack to get the developers and data scientists to build, but then you also want to make sure it works. And I think they’ve just found a really good balance that makes sense on that,” he said.