
Intel AI Exec Gadi Singer: Partners 'Crucial' To Nervana NNP-I's Success

'As we are working with [cloud service providers], we are also bringing it to a point where it will be available for larger scale, and this is where our partners are crucial for success,' Intel's Gadi Singer says of the chipmaker's new chip for deep learning inference workloads.


New Chip Has Higher Compute Density Than Nvidia's T4 GPU

Intel artificial intelligence executive Gadi Singer said the chipmaker's channel partners will play a "crucial" role in the success of the new Nervana NNP-I chips for deep learning inference workloads.

"The first wave of engagements that we have are with some of the large cloud service providers, because they are very advanced users of that and they use it by scale," Singer, a 36-year company veteran, told CRN in an interview. "But as we are working with them, we are also bringing it to a point where [...] it will be available for larger scale, and this is where our partners are crucial for success."

[Related: Nvidia's Jetson Xavier NX Is 'World's Smallest Supercomputer' For AI]

As vice president of Intel's Artificial Intelligence Products Group and the newly formed Inference Products Group, Singer has played a central role in the development of the NNP-I1000, which was revealed alongside the Nervana NNP-T1000 chip for deep learning training at the Intel AI Summit on Tuesday.

Inference is a critical part of deep learning applications: it takes neural networks that have been trained on large data sets and brings them into the real world for on-the-fly decision making. This has created an opening for Intel and other companies to develop specialized deep learning chips — a market that is forecast to reach $66.3 billion in value by 2025, according to research firm Tractica.
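Concretely, inference is the forward-only use of an already-trained network. Here is a minimal sketch of that split (PyTorch is our choice purely for illustration; the article does not name a framework):

```python
# Minimal sketch of inference: run a trained model forward-only, no gradients.
import torch
import torch.nn as nn

# A small classifier stands in for a network trained on a large data set.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()  # put layers such as dropout/batch-norm into inference mode

batch = torch.rand(32, 28, 28)  # placeholder input batch

with torch.no_grad():  # inference skips gradient tracking, saving time and memory
    logits = model(batch)
    predictions = logits.argmax(dim=1)  # on-the-fly decision for each input
```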

From a competitive standpoint, the chipmaker said a 1U server rack containing 32 of its NNP-I chips provides nearly four times the compute density of a 4U rack with 20 Nvidia T4 inference GPUs. In a live demo at the summit, the NNP-I rack was processing 74,432 images per second per rack unit while the Nvidia T4 rack was processing 20,255 images per second per rack unit.
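The "nearly four times" figure follows directly from the demo's per-rack-unit throughput numbers; a quick back-of-the-envelope check (this calculation is ours, derived from the figures quoted above):

```python
# Sanity check of the density claim using the demo throughput figures above.
nnp_i_per_ru = 74_432  # images/sec per rack unit, 1U NNP-I system
t4_per_ru = 20_255     # images/sec per rack unit, 4U Nvidia T4 system

ratio = nnp_i_per_ru / t4_per_ru
print(f"Throughput per rack unit: {ratio:.2f}x")  # ~3.67x, i.e. "nearly four times"
```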

"The fact that it's high power efficiency allows us to reach density and very good total cost of ownership," Singer said.

What follows is an edited transcript of CRN's interview with Singer, who talked about the NNP-I's target use cases, how Intel plans to accelerate adoption, what pain points it plans to address for businesses running deep learning workloads and how it differs from Xeon's deep learning capabilities.
