Intel 10th-Gen Core CPUs Bring Big AI Boost To Ultra-Thin Laptops

Intel's newly revealed 10th-generation Core mobile processors 'are really the next wave of how we think about performance, architecture and design for the mobile go-getter for that ultra-thin and light system,' an Intel executive says of the chipmaker's big 10-nanometer CPU launch at Computex 2019.

Intel is bringing big artificial intelligence performance gains and new features to power-efficient, ultra-thin laptops with its newly announced 10th-generation Core mobile processors.

Based on Intel's Ice Lake client microarchitecture, the processors — which the chipmaker announced at the annual Computex conference in Taiwan on Tuesday — are the first to use the company's next-generation 10-nanometer process technology for a wide release.

[Related: How Intel Helps Partners Use AI, Vision Solutions With Regular CPUs]

At a press event in mid-May, Chris Walker, Intel's vice president and general manager of Mobility Client Platforms, said Intel's new 10th-generation Core processors — which span the Core i3, Core i5 and Core i7 lines — will ship in more than 30 designs for notebooks, 2-in-1s and other mobile PC form factors by holiday 2019, including Dell's updated XPS 13 2-in-1.

The new CPUs, Walker said, "are really the next wave of how we think about performance, architecture and design for the mobile go-getter for that ultra-thin and light system."

"Mobile go-getter" is the term the Santa Clara, Calif.-based company is using to describe the target audience for its new Project Athena program, which aims to establish a new standard for high-performance, ultra-thin notebooks that use 10th-generation Core processors.

Intel's initial 10th-generation Core mobile processors do not match the high core counts and clock frequencies of its ninth-generation Core mobile processors, which reach up to eight cores and 5 GHz. Instead, the new CPUs, which top out at four cores and 4.1 GHz, are designed to combine high performance with long battery life in small and flexible form factors.

Among the most significant new features are fresh AI capabilities that not only accelerate the performance of AI-based applications but also dynamically tune power consumption against performance needs. The latter is made possible by what Intel calls Dynamic Tuning 2.0, which uses machine learning to predict workloads and maximize their performance accordingly.

"That allows us to look at how the specific OEM system design interacts with our CPU to make sure that you get the most of our dynamic frequency, have longer residency in our turbo states, [and get] things like understanding [if] I'm in a tablet mode or I'm in 2-in-1 mode," Walker said.

As for accelerating AI applications, one enabling technology is Deep Learning Boost, or DL Boost, a set of hardware-level instructions first introduced in Intel's second-generation Xeon Scalable server CPUs, which the company says accelerates inference workloads by 2.5 times over its eighth-generation Core mobile processors.
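For readers curious what DL Boost actually speeds up: the instruction set (AVX-512 VNNI) fuses the widening 8-bit multiply-accumulate at the heart of quantized neural-network inference into a single operation. The sketch below shows that arithmetic pattern in plain NumPy — the values are illustrative and are not drawn from Intel's benchmarks.

```python
import numpy as np

def int8_dot(a_u8: np.ndarray, b_s8: np.ndarray) -> np.int32:
    """Dot product of quantized uint8 activations and int8 weights.

    Widen to int32 before multiplying so products don't overflow,
    then accumulate into a 32-bit sum -- this widening multiply-
    accumulate is the step DL Boost performs in a single instruction.
    """
    return np.sum(a_u8.astype(np.int32) * b_s8.astype(np.int32),
                  dtype=np.int32)

# Illustrative quantized data (hypothetical, not from any real model)
activations = np.array([10, 200, 35, 90], dtype=np.uint8)
weights = np.array([3, -7, 12, -1], dtype=np.int8)

print(int8_dot(activations, weights))  # 30 - 1400 + 420 - 90 = -1040
```

Running thousands of these dot products per image is what an inference-based photo search does, which is why collapsing the multiply-widen-accumulate sequence into one instruction pays off.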

Intel demonstrated the impact of DL Boost and its applicability to everyday applications in a few ways. One demonstration showed that Microsoft's Photos application could perform much faster inference-based searches for objects and settings found within images. Another showed a photo editing program removing the blur from an image within a few seconds, much faster than a laptop without DL Boost.

"AI is here in client today, and Ice Lake is allowing us to continue to build on that foundation of AI with these new DL Boost capabilities," Becky Loop, Intel's chief client architect, said.

But DL Boost, which is designed for low-latency, burst workloads, isn't the only way the new processors can accelerate AI workloads. The processors' new Gen11 integrated graphics — which come with up to 64 execution units, a new high for Intel's on-board GPUs — can support high-throughput, sustained AI workloads alongside 1080p gaming and 4K video editing and playback. Intel demonstrated this capability with a video editing program that applied a compute-heavy artistic filter to a video much faster than a laptop without Gen11 graphics.

Intel's third way of accelerating AI workloads is a new component in the processor called the Gaussian Neural Accelerator, which is designed for low-power AI applications. Ronak Singhal, Intel's director of CPU computing architecture, said this can be useful, for example, in an application that automatically transcribes the audio of a live meeting without drawing much power. It can also be used to filter out the background noise in a video chat in a power-efficient manner.

"We offer different solutions instead of trying to have one solution try to solve every single problem, whether it's low latency, high throughput or low power," Singhal said.

The big question facing Intel now is whether independent software vendors will support the chipmaker's new AI capabilities. Singhal said demand for such applications is growing, which is why Intel is ensuring the CPUs are supported by key AI frameworks, such as Windows ML and Apple Core ML.

"I was talking with one of our software experts over the past few weeks about what we're doing with AI, and his comment was we're seeing this as the biggest shift from client ISVs in more than a decade: the need and the desire to integrate inference into their application, whatever their application may be," he said.