5 Cutting-Edge Technologies We Saw At Intel Innovation 2023

Here’s a closer look at five cutting-edge Intel technologies CRN spotted at the third annual Intel Innovation event, including AI-powered software that can turn physical objects into detailed 3-D models using smartphone video, and digital twin software that can create 3-D representations of the real world.

While the keynotes at last week’s Intel Innovation event largely focused on new chips and software that will fuel the chipmaker’s expanding AI strategy, the conference was the launch pad for an even wider variety of cutting-edge technologies that could change the way people work and play.

Spread throughout Intel Innovation’s Technology Showcase, several Intel-developed technologies gave attendees a look at generative AI that can defend computer vision applications from patterns that cause them to glitch, AI-powered digitization software that can turn physical objects into detailed 3-D models using smartphone video, and GenAI-based Audacity plugins that can transform audio on a laptop within seconds.

[Related: 6 Big Announcements At Intel Innovation 2023: From 288-Core CPU To AI Supercomputer]

There was also digital twin software that can create 3-D representations of the real world using cameras and other sensors as well as a suite of software that can improve a laptop’s battery life by reducing the energy used by the display through a variety of methods.

What follows are five cutting-edge Intel technologies CRN spotted at Intel Innovation 2023.

GenAI-Based Defense Against Attacks On Computer Vision Systems

Intel has developed a method using the Stable Diffusion text-to-image model to protect machine learning-based computer vision applications from so-called adversarial attacks.

Marius Arvinte, a research scientist at Intel, told CRN that systems relying on computer vision applications such as autonomous vehicles or satellites can be vulnerable to adversarial attacks, which can cause them to malfunction.

These adversarial attacks consist of patterns appearing in the system’s camera view with details that are imperceptible to the human eye, according to Arvinte. These imperceptible patterns can cause object detection models like YOLOv3 to improperly detect objects, humans or other living beings in view. They can also cause computer vision systems to detect nothing at all.

Arvinte said the Security and Privacy Research Lab, which is part of Intel Labs, devised a system to help organizations understand how adversarial attacks can impact computer vision applications.

From there, Arvinte’s team developed a method to protect computer vision applications from adversarial attacks: using Stable Diffusion to generate an image that covers the imperceptible patterns, allowing the computer vision system to behave normally again.
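
Intel has not published the demo’s implementation, but the underlying idea can be sketched with the open-source diffusers library: run the attacked frame through a low-strength Stable Diffusion image-to-image pass, which regenerates the fine pixel detail carrying the perturbation while leaving the scene intact. The checkpoint name and strength value below are illustrative assumptions, not Intel’s settings.

```python
# Illustrative sketch only -- not Intel's implementation. Assumes the
# open-source `diffusers` library and a Stable Diffusion 1.5 checkpoint.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

adversarial_frame = Image.open("attacked_frame.png").convert("RGB")

# A low-strength img2img pass keeps the scene intact while regenerating
# the fine pixel detail that carries the adversarial perturbation.
purified = pipe(
    prompt="",                # no guidance needed; we only want a re-render
    image=adversarial_frame,
    strength=0.2,             # illustrative value, small enough to preserve content
).images[0]

purified.save("purified_frame.png")  # hand this to the detector (e.g., YOLOv3)
```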

Neural Object Cloning

Neural Object Cloning is AI-powered asset digitization software developed by Intel that can create a detailed 3-D model of a physical object using video from a smartphone.

Ruslan Rin, a senior software engineer at Intel, told CRN that the software, which is being developed by Intel Labs in partnership with the company’s Data Center and AI Group, can generate a 3-D representation of a real-life object in 15 to 20 minutes based on a video that captures most of an object’s angles.

Neural Object Cloning uses a neural network that Intel trained to understand how to craft a 3-D model, including its shape, texture and reflective features, from videos of a real object.
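
Intel has not released the pipeline, but video-to-3-D systems of this kind typically begin by sampling frames that cover the object from many angles. A minimal sketch of that input stage, assuming OpenCV and a hypothetical clip name:

```python
# Illustrative input stage only -- Intel's reconstruction network is not public.
import cv2

video = cv2.VideoCapture("object_walkaround.mp4")  # hypothetical smartphone clip
fps = video.get(cv2.CAP_PROP_FPS) or 30

frames, index = [], 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Sample roughly two frames per second; dense coverage of the object's
    # angles matters more than raw frame count for reconstruction.
    if index % max(int(fps / 2), 1) == 0:
        frames.append(frame)
    index += 1
video.release()

print(f"Sampled {len(frames)} frames for reconstruction")
# The sampled frames would then feed the trained network that recovers
# shape, texture, glossiness and roughness, per Rin's description.
```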

“We put a lot of efforts into reconstructing texture, like the glossiness and roughness of material,” Rin said.

The objects can be imported into content creation platforms such as Unreal Engine, whether for video game development or other 3-D applications.

Intel is expected to release a version of Neural Object Cloning by the end of the year.

Intel SceneScape

SceneScape is digital twin software developed by Intel that uses cameras and other kinds of sensors to create a real-time, 3-D digital representation of a physical area.

“SceneScape is trying to get beyond individual sensors and build real-time, 4-D digital twins of, ultimately, the world,” Rob Watts, the lead architect of SceneScape at Intel, told CRN.

Watts said “4-D” because SceneScape’s digital twin can show digital representations of humans and other objects moving as they maneuver through the physical world. The software can also approximate the real location of humans and other objects using XYZ coordinates.

In addition, SceneScape can be used to create zones within an area and set up alerts if an unauthorized person or object enters that zone.
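
Intel has not published SceneScape’s API, but once objects are tracked in world coordinates, a zone alert reduces to a simple geometric test. A conceptual sketch, with all names and values assumed:

```python
# Conceptual sketch of a SceneScape-style zone alert -- not Intel's actual API.
from dataclasses import dataclass

@dataclass
class Zone:
    """Axis-aligned region in the digital twin's world coordinates (meters)."""
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# Hypothetical restricted area and tracked objects with world XY positions.
forklift_lane = Zone("forklift lane", 10.0, 14.0, 0.0, 30.0)
tracked = [("person", 12.3, 8.1), ("forklift", 11.0, 9.5)]

for label, x, y in tracked:
    if label == "person" and forklift_lane.contains(x, y):
        print(f"ALERT: {label} entered {forklift_lane.name} at ({x}, {y})")
```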

These capabilities can help organizations in environments where situational awareness is critical, such as construction zones or hospitals, where looking at multiple camera views may not be sufficient.

“Instead of saying a person is in this pixel location in the camera, I’m saying this person is at this location in my factory in XYZ or [geospatial] coordinates, and they’re about to be run over by a forklift that’s coming around the corner at this location,” Watts said.

In another example, Watts described a car parking near a hospital’s emergency room with no one leaving the vehicle, a scenario SceneScape could recognize more readily than discrete camera feeds.

“Turns out, that’s a big problem,” Watts said.

Intel Intelligent Display

Intel Intelligent Display is a software suite that enables a variety of display features that can help save energy and, therefore, extend battery life for laptops.

Mike Bartz, a technical marketing employee at Intel, told CRN that the first feature is user-based refresh rate, which turns off or dims the laptop’s display if the user walks away.

“That saves energy over the lifespan of when you’re working throughout the day,” he said.

Another feature is called dynamic visual and power enhancements, which “intelligently” dims the screen’s backlight “based on what is being displayed on screen,” according to Bartz. This can be useful when browsing a website using dark mode or watching a video where there are dark scenes.

“You save milliwatts to watts based off of the content over time,” he said.
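
Intel has not detailed the algorithm, but content-adaptive dimming generally works by measuring how bright the frame actually is and lowering the backlight to match, with the display driver compensating so the image looks unchanged. A rough sketch of the measurement step, with illustrative values:

```python
# Conceptual sketch of content-adaptive backlight dimming -- illustrative only.
import numpy as np

def backlight_level(frame: np.ndarray, floor: float = 0.3) -> float:
    """Scale backlight to the frame's peak luminance, never below `floor`.

    `frame` is an HxWx3 array of floats in [0, 1]. A dark-mode page or a
    dim movie scene yields a low peak, so the backlight can drop; a real
    driver would also boost pixel values to keep perceived brightness."""
    luminance = frame @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    return max(float(luminance.max()), floor)

dark_mode_page = np.full((1080, 1920, 3), 0.25)  # mostly dark content
print(f"backlight: {backlight_level(dark_mode_page):.0%}")  # ~30% -> power saved
```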

A third feature of Intel Intelligent Display is called autonomous low refresh rate, which allows a 120-Hz monitor’s refresh rate to drop down to match the frame rate of what’s on the screen.

“However, once the user interacts, such as moving the mouse, you have instantaneous 120-Hz smoothness. So by offering a lower refresh rate, you can save wattage,” Bartz said.
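
That matching logic can be sketched as choosing the lowest panel mode that cleanly covers the content’s frame rate, snapping back to 120 Hz on input. The supported rates below are assumptions:

```python
# Conceptual sketch of autonomous low refresh rate -- illustrative only.
SUPPORTED_HZ = [24, 30, 48, 60, 120]  # assumed panel modes

def pick_refresh(content_fps: float, user_active: bool) -> int:
    if user_active:
        return 120  # instantaneous full-rate smoothness on mouse movement
    # Lowest supported rate that is a whole multiple of the content rate.
    for hz in SUPPORTED_HZ:
        if hz >= content_fps and hz % round(content_fps) == 0:
            return hz
    return 120

print(pick_refresh(24, user_active=False))  # 24 Hz while watching film content
print(pick_refresh(24, user_active=True))   # 120 Hz the moment the user interacts
```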

Bartz said Intel Intelligent Display “runs agnostic to the chipset platform,” but it will launch on laptops with Intel’s upcoming Core Ultra “Meteor Lake” processors this December.

“Intel Intelligent Display, combined with the Meteor Lake platform, adds a one-two punch for even more power savings for the user experience,” he said.

OpenVINO-Based AI Plugins For Audacity

Intel has developed a handful of AI-powered plugins for the open-source Audacity audio editing application to demonstrate how the upcoming Core Ultra processors can accelerate AI workloads.

The plugins take advantage of Intel’s OpenVINO toolkit to optimize the inference of AI models on the CPU, GPU and neural processing unit of the Core Ultra processors, according to Ryan Metcalfe, a deep learning research and development engineer at Intel.
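
The plugins’ inference path follows OpenVINO’s standard read-then-compile pattern, shown here with the toolkit’s Python API for brevity (the plugins themselves are C++, and the model path is a placeholder):

```python
# Minimal OpenVINO inference sketch -- model path is a placeholder.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on Core Ultra

model = core.read_model("model.xml")          # OpenVINO IR file (placeholder)
compiled = core.compile_model(model, "AUTO")  # let OpenVINO pick CPU/GPU/NPU
# result = compiled(input_tensor)             # run inference on the chosen device
```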

One plugin, called Music Source Separation, uses Meta’s demucs v4 model and pipeline to separate song tracks into four stems: drums, bass, vocals and other instruments. Intel converted the demucs v4 pipeline from Python to C++ and used OpenVINO to optimize the model.
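
Intel’s port is C++, but the original Python demucs v4 API shows the shape of the operation: load the pretrained hybrid transformer model and apply it to a waveform to get the four stems. The file name below is a placeholder:

```python
# Source separation with the original Python demucs v4 API -- Intel's
# Audacity plugin reimplements this pipeline in C++ with OpenVINO.
from demucs.pretrained import get_model
from demucs.apply import apply_model
from demucs.audio import AudioFile

model = get_model("htdemucs")  # hybrid transformer demucs v4
wav = AudioFile("song.mp3").read(
    streams=0, samplerate=model.samplerate, channels=model.audio_channels
)

stems = apply_model(model, wav[None])[0]      # shape: (4, channels, samples)
for name, stem in zip(model.sources, stems):  # ['drums', 'bass', 'other', 'vocals']
    print(name, stem.shape)
```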

Another plugin generates music in different genres and styles using Riffusion, a Stable Diffusion model and pipeline that was fine-tuned to “generate snippets of audio from text prompts by generating spectrogram images,” which are “then converted to waveforms,” according to Metcalfe.
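
The spectrogram-to-waveform step is Riffusion’s distinctive trick: the diffusion model paints a mel spectrogram image, and a phase-reconstruction algorithm such as Griffin-Lim turns it back into sound. A hedged sketch of that final step using librosa, with the spectrogram array and parameters assumed:

```python
# Sketch of Riffusion's final spectrogram-to-audio step -- parameters assumed.
import numpy as np
import librosa
import soundfile as sf

# `mel` stands in for the generated spectrogram image, decoded to a
# (n_mels, frames) array of magnitudes.
mel = np.load("generated_spectrogram.npy")  # placeholder

# Griffin-Lim phase reconstruction recovers a waveform from magnitudes alone.
audio = librosa.feature.inverse.mel_to_audio(
    mel, sr=44100, n_fft=2048, hop_length=512
)
sf.write("generated_clip.wav", audio, 44100)
```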

In a demonstration to CRN, the plugin generated a song in a matter of seconds on an Intel development laptop running on a Core Ultra processor.

A third plugin, called Whisper Translation/Transcription, performs speech transcription or translation on an audio track using the Whisper models from OpenAI and the whisper.cpp project. Intel contributed OpenVINO support to the models.
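
The same operation maps onto OpenAI’s reference Whisper Python API in a few lines; the plugin itself uses whisper.cpp with the OpenVINO backend, and the file name and model size below are placeholders:

```python
# Transcription/translation with OpenAI's reference Whisper API -- the
# Audacity plugin uses whisper.cpp with OpenVINO instead.
import whisper

model = whisper.load_model("base")  # model size is a placeholder choice

transcript = model.transcribe("track.wav")                     # speech -> text
translation = model.transcribe("track.wav", task="translate")  # -> English

print(transcript["text"])
print(translation["text"])
```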

A fourth plugin, called Music Style Remix, creates an “altered version of an audio snippet” using a text prompt. This plugin also relies on Riffusion, except this time it “consumes input audio chunks to define the desired structure such as tempo and pitch,” according to Metcalfe.

“For example, at [Intel] Innovation I was showing how a ‘rock’ instrumental track can be remixed to have an ‘edm’ (electronic dance music) flavor using this feature,” he said in an email.

A fifth plugin uses models from the OpenVINO toolkit’s Open Model Zoo repository to remove “unwanted background noise from audio clips containing speech,” according to Metcalfe.