Intel CEO: 'We've Got The Largest Collection Of Technology We Can Bring To AI'


Printer-friendly version Email this CRN article

Intel is making huge investments in artificial intelligence, and CEO Brian Krzanich thinks the breadth of the company's AI product portfolio puts it at the forefront of a technology that's still in its infancy.

In an interview with CNBC Monday, Krzanich was asked about competition with GPU manufacturer Nvidia when it comes to AI.

"When you look at the AI industry, you have to look at it in its infancy," Kraznich said in the interview. "Rather than a single solution like the GPU guys are bringing, we're bringing a wide swath of solutions. We've acquired Nervana for the highest-performing solutions; we've acquired Mobileye and Movidius for drones. So we have the largest collection of technology we can bring to AI … we've got a wide variety and AI is not just one solution."

Deep learning, the branch of AI that teaches machines to process text, voice and other types of data, often relies on the powerful and efficient parallel computing capabilities of GPUs, a category of chip that Nvidia specializes in and Intel does not manufacture.
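To see why parallel hardware matters here, note that deep learning workloads are dominated by large matrix multiplications. As a rough illustration (a NumPy sketch on the CPU, not actual GPU code), the same layer computation can be written as nested loops, one multiply-add at a time, or as a single vectorized call of the kind that parallel hardware executes far faster:

```python
import numpy as np

# Toy stand-in for one neural-network layer: outputs = inputs @ weights.
# Training a deep learning model repeats operations like this billions of
# times, which is why GPU-style parallelism pays off.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 128))   # batch of 64 examples, 128 features
weights = rng.standard_normal((128, 32))  # 128 features mapped to 32 units

# Naive triple loop: one scalar multiply-add per step.
naive = np.zeros((64, 32))
for i in range(64):
    for j in range(32):
        for k in range(128):
            naive[i, j] += inputs[i, k] * weights[k, j]

# Vectorized form: the whole product in one call, which an optimized
# backend (or a GPU) computes with many multiply-adds in parallel.
vectorized = inputs @ weights

# Both paths produce the same result; only the execution strategy differs.
print(np.allclose(naive, vectorized))
```

The numerical result is identical either way; the hardware question is simply how many of those multiply-adds can run at once.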

[Related: Chip Competition Intensifies As AMD, Intel Face Off With New Competing Workstation PC Platforms]

Nvidia sells a purpose-built deep learning system, the DGX-1, whose software stack includes deep learning frameworks, the company's GPU training system, drivers and deep learning SDKs. Nvidia did not respond to a request for comment by publication.

Rather than manufacturing GPUs, however, Intel has been building out its own AI portfolio, punctuated by its acquisition of AI startup Nervana and its development of the startup's technology into an application-specific integrated circuit called the Nervana Engine.

Intel has said it aims to deliver up to a 100-fold reduction in the time needed to train a deep learning model over the next three years, compared with GPU solutions. The company also expects the next generation of its Xeon Phi processors to deliver up to four times the deep learning performance of the previous generation.

The Santa Clara, Calif.-based company has also been leveraging its partners to better understand various AI applications. Intel said Tuesday it would team up with Tata Consultancy Services to create an Artificial Intelligence Center of Excellence aimed at making AI solutions easier to build. The center will help developers, academics and startups take their solutions from proof of concept to implementation.

On the other end of the spectrum, Intel has integrated Altera's field-programmable gate arrays into its server processors, which also carry function-specific accelerators, including inference engines for deep learning. The company has likewise integrated Mobileye's technology, with plans to build a fleet of Level 4 autonomous test cars in the next year.

Autonomous driving is a big application for AI, and Krzanich said Intel "is poised to accelerate its autonomous driving business from car-to-cloud" – particularly through plans to integrate with Alphabet's self-driving unit, Waymo, to build fully self-driving cars.

"We've had a deep relationship with Waymo for a long time. What we're doing is bringing our silicon leadership, our Xeon core, our Intel architecture x86, together with their IP around sensor fusion and how the car sees the world … so this is a great way of two companies working together. They're bringing the AI, the mapping, the sensor fusion, and we're bringing the silicon technology together with them," said Krzanich in the interview.
