AMD: We’re Exploring A Discrete GPU Alternative For PCs
Rahul Tikoo, a top AMD PC executive, tells CRN that the chip designer is ‘talking to customers’ about ‘use cases’ and ‘potential opportunities’ for a dedicated accelerator chip that is not a GPU but could be a neural processing unit. ‘We can get there pretty quickly,’ he says.
A top AMD PC executive said the chip designer is exploring the potential for a discrete neural processing unit that could serve as an alternative to stand-alone GPUs in PCs.
In response to a CRN question at a briefing held last month before AMD’s Advancing AI event, Rahul Tikoo, the head of AMD’s client CPU business, confirmed that the Santa Clara, Calif.-based company is “talking to customers” about “use cases” and “potential opportunities” for a dedicated accelerator chip that is not a GPU but could be a neural processing unit (NPU).
[Related: AMD Exec: Dell Commercial PC Deal To Cover Broad Customer Base]
Tikoo made the comments as OEMs like Lenovo, Dell Technologies and HP Inc. start to explore discrete NPUs and other kinds of dedicated accelerator chips as alternatives to GPUs in PCs for AI workloads. Dell, for instance, last month announced that it would use an NPU-based Qualcomm AI 100 PC inference card inside a new Dell Pro Max Plus laptop.
“It’s a very new set of use cases, so we’re watching that space carefully, but we do have solutions if you want to get into that space—we will be able to,” said Tikoo, who returned to AMD last year as senior vice president and general manager of the client business unit after spending 12 years in leadership positions at Dell.
As for when AMD could introduce such a product, Tikoo said he can’t “talk about a future road map,” adding that it’s “under [a non-disclosure agreement].”
“But certainly if you look at the breadth of our technologies and solutions, it’s not hard to imagine we can get there pretty quickly,” he said.
The CTO of AMD systems integration partner Sterling Computers told CRN last week that he believes the way AMD is using AI engine technology from its Xilinx acquisition as the basis for the NPU in its Ryzen processors “opens up a broad path” for the company to introduce discrete products with faster NPU performance in the future.
“If this particular NPU tile creates 50 TOPS [trillion operations per second], tack on two of them, make it 100 [TOPS],” said the CTO, Christopher Cyr, whose North Sioux City, S.D.-based company was ranked No. 54 in CRN’s 2025 Solution Provider 500 list.
However, he added, a discrete NPU solution “would have to equate to something that would burn less energy than a stand-alone GPU.”
Cyr said he has been impressed with AMD’s efforts to enable AI software on its processors, including the NPU. As an example, he cited Gaia, an open-source project from AMD that is designed to run local large language models well on Ryzen-based Windows PCs.
“They’re making really good inroads towards leveraging that whole ecosystem,” he said.
Discrete NPUs Pop Up From Intel, Qualcomm And Startups
While GPUs have long been the default accelerator for a variety of demanding workloads, the introduction of the NPU has shaken up the processing hierarchy, allowing PCs to run AI and machine learning workloads quickly while consuming less energy than a GPU would.
The most popular implementation of the NPU so far in PCs has been to integrate the processor alongside a CPU and GPU on a system-on-chip, like the Intel Core Ultra 200 series, AMD Ryzen AI 300 series and Snapdragon X series chips.
However, there have been instances where a PC uses a discrete NPU component that is separate from the CPU or system-on-chip. For instance, before Intel introduced the NPU in its Core Ultra 100 series, Microsoft released a Surface Laptop in 2023 that used a discrete Intel Movidius visual processing unit, the predecessor to Intel’s NPU.
Then there’s the recently announced Dell Pro Max Plus laptop that uses a Qualcomm AI 100 PC inference card, with Dell calling the device the “world’s first workstation with an enterprise-grade discrete NPU.” The inference card packs two Cloud AI 100 data center processors along with 64 GB of LPDDR4x memory, allowing it to deliver 450 trillion operations per second (TOPS) of 8-bit integer performance in a thermal envelope of up to 75 watts, according to Dell.
There are also efforts to introduce discrete NPUs from lesser-known companies, like EnCharge AI. The Santa Clara, Calif.-based startup announced back in May a 200-TOPS NPU that can use as little as 8.25 watts in an M.2 form factor for laptops, as well as a four-NPU PCIe card delivering roughly 1,000 TOPS to provide what it called “GPU-level compute capacity at the fraction of the cost and power consumption.”