
How Intel Helps Partners Use AI, Vision Solutions With Regular CPUs

'It's a massive deal because the install base is so big,' says Intel IoT ecosystem general manager Steen Graham in an interview about how channel partners can deploy computer vision solutions at the edge with regular CPUs using Intel's OpenVINO toolkit.


Using OpenVINO To Deploy AI At The Edge

For Intel channel partners looking to deploy computer vision and artificial intelligence solutions in edge devices, the semiconductor giant wants to make something clear: partners can build such applications on top of existing infrastructure with regular CPUs.

This is made possible by OpenVINO, a developer toolkit released by Intel a year ago that gives developers easy ways to optimize and deploy trained neural networks — systems designed to function like the human brain — on Intel's processor technologies for visual inference applications and beyond.

[Related: Intel's New IoT Sales Chief Talks 'Customer Obsession,' New Hardware]

Since its initial release, OpenVINO — which stands for Open Visual Inference and Neural Network Optimization — has been used by companies for a range of use cases. Philips Medical, for instance, is using OpenVINO to significantly improve the performance of its bone-age-prediction models, while GridSmart has used the toolkit to reduce wait times at traffic intersections.

A key capability for Intel's broader channel, however, is the ability to optimize neural networks on the company's general-purpose processors. That means GPUs and Intel's special-purpose processors, like Movidius vision processing units and FPGAs, aren't required, depending on the workload.

"It's a massive deal because the install base is so big," said Steen Graham, general manager of Intel's Internet of Things ecosystem and channels, in an interview with CRN. "Getting those workloads to run on one host system is going to reduce your total cost of ownership and increase your time to deployment of convolutional neural networks."

In one example, video surveillance provider AxxonSoft developed AI deep learning analytics capabilities for security solutions at several World Cup stadiums. Thanks to OpenVINO, these deep learning workloads were optimized for an 8X performance boost on existing Intel Core processors and an 8.3X performance boost on existing Intel Xeon processors.

"They had physical constraints on the infrastructure that didn't have additional capacity or space," Graham said, "but they wanted to add deep learning capability to their existing surveillance technologies to the reduce the number of false alerts and [add] capabilities like license plate detection."

What follows is an edited transcript based on an interview with Graham that dives into OpenVINO's capabilities, compelling use cases, how channel partners can take advantage of the toolkit and why other kinds of Intel products, like FPGAs, are better suited for certain kinds of workloads.

