CES 2026: 8 Big Chip Announcements By Intel, Nvidia, AMD And Qualcomm

The chip news wasn’t just limited to PCs and other consumer-facing products, demonstrated by Nvidia’s focus on data center-related announcements and AMD CEO Lisa Su kicking off her keynote by talking about rival products in the AI infrastructure market.

The world’s leading chip companies came to CES 2026 this week with some major announcements that will give channel partners a variety of new and powerful devices to sell, but the news wasn’t just limited to PCs and other consumer-facing products.

In a show of how central AI infrastructure spending has become to the economy and tech industry, Nvidia mainly came to CES this year with data center-related announcements, namely details of its forthcoming Vera Rubin AI platform. It was a major break from the AI infrastructure giant’s previous CES appearances that mainly focused on PC-related news.

[Related: CES 2026: HP Launches An AI PC In A Keyboard With EliteBoard G1a]

AMD, too, brought AI data center hype to a show historically known for consumer tech, with its CEO, Lisa Su (pictured), opening her keynote by talking up the company’s forthcoming “Helios” AI server rack and the associated Instinct MI400 GPU series.

But there were still plenty of announcements to keep PC sellers satiated, with AMD coming out with new Ryzen AI processors for laptops and other small PC form factors. Intel, still finding its way out of the many challenges that have beset the semiconductor giant over the years, came out swinging against rivals with its latest Core Ultra processors. Qualcomm also brought some PC goodness to CES with mid-range Snapdragon X2 processors.

There were announcements, too, for partners who sell IoT and embedded devices, whether for consumer, commercial or industrial purposes. These announcements came from AMD with its new Ryzen AI Embedded series, Qualcomm with its new Dragonwing Q-series processors and Intel, which plans to release edge versions of the new Core Ultra chips.

What follows are the most important details from the chip announcements Intel, Nvidia, AMD and Qualcomm made during this week’s CES event.

Intel Core Ultra Series 3

Intel marked the launch of its Core Ultra Series 3 processors for laptops and other PC form factors, saying the chips can outperform rival products across a variety of measures.

The processors, known under the code name “Panther Lake,” are set to power more than 200 PC designs from OEMs, with the first systems arriving in late January and others following in the coming months, according to the chipmaker. The company also plans to release models for edge computing, saying that they have been certified for embedded and industrial use cases.

The Core Ultra Series 3 chips are the first to use the semiconductor giant’s Intel 18A manufacturing process. It was the last node in former Intel CEO Pat Gelsinger’s comeback plan, and his successor, Lip-Bu Tan, has said that Panther Lake will play a crucial role in helping the company win over customers for its Intel Foundry business.

Headlined by a new class of models bearing the Core Ultra X9 and Core Ultra X7 names, the processors feature up to 16 cores built on new P-core, E-core and low-power E-core architectures; a 5.1 GHz P-core turbo frequency; 12 GPU cores based on Intel’s Xe3 architecture; and a memory speed of 9,600 megatransfers per second.

All processors feature an NPU capable of 50 trillion operations per second (TOPS).

Compared to the Core Ultra Series 2 “Lunar Lake” processors, the new chips provide up to 60 percent faster multi-threaded performance, 77 percent better graphics performance and two times faster AI performance, according to Intel. The processors can also enable up to 27 hours of battery life for laptops.

The chipmaker said these improvements allow the Core Ultra Series 3 chips to outperform comparable offerings from AMD’s Ryzen AI 300 series and Qualcomm’s Snapdragon X series in application areas like productivity, gaming and AI.

For instance, Intel claimed that the GPU, combined with its new XeSS3 super-sampling and multi-frame generation technologies, can deliver up to four times smoother gameplay than AMD’s rival graphics technologies in the Ryzen AI 300 series.

The company also said that the Core Ultra Series 3 processors are more efficient and can deliver better battery life, though not in every scenario.

Compared to a top Ryzen AI 300 chip, Intel’s chips use up to 78 percent less power for a one-on-one Zoom call and up to 48 percent less power for web browsing on Microsoft Edge, according to the semiconductor giant.

The company said a top Core Ultra Series 3 processor can support a battery life of up to 16.5 hours for streaming Netflix. In contrast, laptops with comparable chips from AMD and Qualcomm can last for 12.6 hours and 17.7 hours, respectively, according to Intel.

But when it comes to battery life while running a Microsoft Teams video call with nine participants, Intel said its top chip beats the competing AMD and Qualcomm products, lasting 7.5 hours versus AMD’s 7.3 hours and Qualcomm’s 6.9 hours.

AMD Ryzen AI 400 Series

AMD announced a new line-up of Ryzen AI processors to go up against Intel’s latest in thin-and-light laptops—and showed how it’s fighting Nvidia in a new arena.

The Santa Clara, Calif.-based chip designer revealed that its forthcoming Ryzen AI 400 series will push the maximum CPU frequency to 5.2GHz and NPU performance to 60 trillion operations per second (TOPS) for laptops compatible with Microsoft’s Copilot+ PC program. In contrast, the Ryzen AI 300 chips maxed out at 5.1GHz and 55 TOPS.

The company also teased that it plans to release socketed Ryzen AI 400 models for desktop PCs, which would mark a first for AMD’s AI PC chip brand.

The Ryzen AI 400 processors are expected to debut in laptops and other small form factor devices from Dell Technologies, HP Inc., Lenovo and other OEMs by March.

AMD is pitching the new wave of Ryzen AI chips for those who want devices with the best CPU, GPU and NPU performance in combination with multi-day battery life and “leading AI performance and experiences.”

The Ryzen AI 400 processors also boost graphics frequency to as much as 3.1 GHz and memory speed to a maximum 8,533 megatransfers per second (MT/s), up from the 2.9 GHz graphics frequency and 8,000 MT/s memory speed of the previous generation.

The specs that aren’t changing from the Ryzen AI 300 series are the maximum 16 cores and 32 threads based on AMD’s Zen 5 architecture as well as the maximum 16 GPU cores based on the RDNA 3.5 architecture.

Compared to Intel’s 30-watt, eight-core Core Ultra 9 288V, AMD claimed that its 28-watt, 16-core Ryzen AI 9 HX 470 is 30 percent faster for multitasking, 70 percent faster for content creation, 10 percent faster for gaming and 70 percent faster on the Cinebench 2024 nT benchmark while running on the laptop’s battery in balanced power mode.

AMD Ryzen AI Max Series

AMD expanded its lineup of Ryzen AI Max processors that debuted a year ago and came out with new claims about how they compare to the chips inside Nvidia’s DGX Spark mini workstation and Apple’s M5-based MacBook Pro.

These chips are larger and more powerful than the standard Ryzen AI processors, mainly thanks to the maximum of 40 GPU cores based on the RDNA 3.5 architecture, which helps make the integrated GPU capable of performing up to 60 teraflops. The top chip maxes out at 16 Zen 5 cores, 32 threads and a 5.1 GHz boost frequency. All others reach 5.0GHz, and the NPU performance is 50 trillion operations per second for every model.

A key feature of the Ryzen AI Max series is how the processors can allocate up to 128 GB of system memory for the large, integrated GPU, which can make them a good fit for running large AI models and other heavy workloads.

With the expanded lineup, AMD has added 8- and 12-core models that reach 40 GPU cores and 60 teraflops, going beyond the 32 GPU cores and 40 teraflops in the existing 8- and 12-core products that came out last year.

In new claims, AMD compared the HP Z2 Mini G1a mini workstation, powered by its flagship Ryzen AI Max+ 395 Pro and equipped with 128 GB of system memory, to Nvidia’s DGX Spark, powered by the GB10 system-on-chip with the same amount of memory. Nvidia released DGX Spark last year as a new class of workstation PC for AI developers.

The company said the HP device provides 50 percent more tokens per second per dollar for OpenAI’s 20-billion-parameter GPT-OSS large language model and 70 percent more tokens per second per dollar for the 120-billion-parameter version of the same model.

Taking aim at Apple, the company said the Asus ROG Flow Z13 laptop, powered by the same Ryzen AI Max+ 395 Pro, can outperform a MacBook Pro based on Apple’s latest M5 system-on-chip by 40 percent in AI inference, 80 percent in multitasking, 80 percent in content creation and 60 percent in gaming, based on benchmarks it ran internally.

Qualcomm Snapdragon X2 Plus

Qualcomm said that its forthcoming Snapdragon X2 Plus processors for Windows 11 PCs will bring major performance boosts over the previous generation, pitching the chips for “modern professionals” who want a “fast, responsive and portable device.”

Announced at CES 2026, the Snapdragon X2 Plus processors are expected to land in select devices from leading OEMs by June and represent the middle of the pack for the Snapdragon X2 Series the chip designer revealed last September. Qualcomm is also targeting “aspiring creators” and “everyday users” with the new chips.

The product line, designed to power PCs using Microsoft’s Copilot+ PC brand, represents Qualcomm’s revitalized push to take CPU market share away from Intel and AMD—and create more competition for Apple’s Mac computers.

Compared to the Snapdragon X Plus chips that debuted in 2024, the new mid-range processors improve CPU single-core performance by up to 35 percent, CPU multi-core performance by up to 17 percent, GPU performance by up to 29 percent and NPU performance by up to 78 percent, according to the company.

The Snapdragon X2 Plus segment consists of two processors, with one sporting up to 10 of Qualcomm’s third-generation Oryon cores along with a 1.7GHz GPU frequency and the other featuring six cores along with a 900MHz GPU frequency. The GPU is based on the company’s next-generation Adreno graphics architecture.

While Qualcomm did not make any competitive comparisons in a pre-brief with journalists last month, the company said the chips—like the flagship Snapdragon X2 Elite products—feature the “fastest NPU in a laptop,” capable of 80 trillion operations per second.

Both Snapdragon X2 Plus processors offer a maximum multi-threaded frequency of 4.0GHz and a transfer rate of 9,523 megatransfers per second with the LPDDR5x memory type.

Like the Snapdragon X2 Elite and Snapdragon X2 Elite Extreme processors, the mid-range chips will feature an option for Qualcomm’s new Snapdragon Guardian out-of-band PC management, which is the company’s answer to Intel’s vPro platform.

Nvidia Vera Rubin Platform

Nvidia revealed a new “context memory” storage platform, “zero downtime” maintenance capabilities, rack-scale confidential computing and other new features for its forthcoming Vera Rubin NVL72 server rack for AI data centers.

The AI infrastructure giant used the CES 2026 keynote by Nvidia CEO Jensen Huang (pictured) to mark the launch of its Rubin GPU platform, the highly anticipated follow-up to its fast-selling Blackwell Ultra products. But while the company said Rubin is in “full production,” related products won’t be available from partners until the second half of this year.

In promoting Rubin, Nvidia touted support from a wide range of large and influential tech companies, including Amazon Web Services, Microsoft, Google Cloud, CoreWeave, Cisco, Dell Technologies, HPE, Lenovo and many more.

The Santa Clara, Calif.-based company plans to initially make Rubin available in two ways: through the Vera Rubin NVL72 rack-scale platform, which connects 72 Rubin GPUs and 36 of its custom, Arm-compatible Vera CPUs, and through the HGX Rubin NVL8 platform, which connects eight Rubin GPUs for servers running on x86-based CPUs.

Both of these platforms will be supported by Nvidia’s DGX SuperPod clusters.

Dion Harris, a senior director at Nvidia, said the Rubin platform, with the Vera Rubin NVL72 rack as its flagship product, consists of the Rubin GPU, the Vera CPU—Nvidia’s first CPU with custom, Arm-compatible cores—and four other new chips the company has co-designed to “meet the needs of the most advanced models and drive down the cost of intelligence.”

Each Vera CPU features 88 custom Olympus cores, 176 threads with Nvidia’s new spatial multi-threading technology, 1.5 TB of system LPDDR5x memory, 1.2 TBps of memory bandwidth and confidential computing capabilities. It also features a 1.8 TBps NVLink chip-to-chip interconnect to support coherent memory with the GPUs.

The Rubin GPU, on the other hand, is capable of 50 petaflops for inference computing using Nvidia’s NVFP4 data format, which is five times faster than Blackwell, the company said. It can also perform 35 petaflops for NVFP4 training, which is 3.5 times faster than its predecessor. The bandwidth for its HBM4 high-bandwidth memory is 22 TBps, 2.8 times faster, while the NVLink bandwidth per GPU is 3.6 TBps, two times faster.

The platform also includes the liquid-cooled NVLink 6 Switch for scale-up networking. This switch features 400G SerDes, 3.6 TBps of per-GPU bandwidth for communication between all GPUs, a total bandwidth of 28.8 TBps and 14.4 teraflops of FP8 in-network computing.

In addition, the Rubin platform makes use of Nvidia’s ConnectX-9 SuperNIC and BlueField-4 DPU to take scale-out networking to the next level, according to the company.

All of these parts go into the Vera Rubin NVL72 platform, which is capable of 3.6 exaflops of NVFP4 inference performance, five times greater than the Blackwell-based iteration, Nvidia said. Training performance with the NVFP4 format reaches a purported 2.5 exaflops, which is 3.5 times higher than the predecessor.

AMD Ryzen AI Embedded Series

AMD revealed a new family of Ryzen AI Embedded processors to chase after edge AI use cases ranging from humanoid robotics to automotive digital cockpits.

Designed for the “most constrained embedded systems,” these processors combine high-performance Zen 5 CPU cores to power deterministic control systems, RDNA 3.5 GPU cores to fuel real-time visualization and graphics, and an XDNA 2 NPU for low-latency, low-power AI workloads, according to AMD.

The chips are divided into two segments based on the use case: the P100 series processors for in-vehicle experiences and industrial automation, and the X100 series for computationally demanding physical AI and autonomous systems.

The company said it has started sampling P100 chips with customers and expects to begin sampling the X100 processors in the first half of this year.

Coming in a ball-grid array (BGA) form factor, the P100 processors feature 4-6 cores, up to a 4.5 GHz maximum frequency and an NPU capable of up to 50 trillion operations per second—all within a 15-54-watt thermal envelope. With a 10-year lifecycle, these chips can withstand extreme temperatures ranging from -40 degrees to 221 degrees Fahrenheit.

The processors are supported by a unified software stack that allows developers to program for the CPU, GPU and NPU. Built on the open-source, Xen hypervisor-based virtualization framework, the software stack allows for the secure isolation of multiple operating system domains, allowing for the parallel operation of Yocto or Ubuntu for running the human-machine interface, FreeRTOS for managing real-time control, and Android or Windows for running more advanced applications.

AMD Instinct MI400 Series

AMD provided an early look at its upcoming Instinct MI400 series GPUs that will power its first rack-scale AI platform—and announced a new version aimed at enterprises.

The new enterprise model, the MI440X, is designed for training, fine-tuning and inference workloads in a compact, eight-GPU server design for on-premises data centers. According to AMD, the server will integrate “seamlessly into existing infrastructure.”

The GPU adds to the existing MI455X that AMD is using for its “Helios” server rack, which is expected to go toe-to-toe against Nvidia’s much-hyped Vera Rubin NVL72 platform, and the MI430X, which is designed for high-performance computing and sovereign AI workloads.

During her keynote on Monday, AMD CEO Lisa Su revealed that the MI455X has 320 billion transistors, 70 percent more than the company’s current flagship GPU, the MI355X. The MI455X features 12 compute and I/O chiplets made using 2-nanometer and 3-nanometer manufacturing processes as well as 432GB of HBM4 high-bandwidth memory, all of which are connected using 3D chip stacking technology.

A single Helios rack will consist of 72 MI455X GPUs and 18 EPYC “Venice” CPUs as well as AMD’s 800-GbE Pensando “Vulcano” and “Salina” networking chips, which will enable tens of thousands of Helios racks to connect across a data center.

“I'm happy to say Helios is exactly on track to launch later this year. We expect it will set the new benchmark for AI performance,” Su said.

Qualcomm Dragonwing Q-Series

Qualcomm revealed new Dragonwing Q-series processors as part of a bigger IoT push that it said now includes services and developer offerings to serve a wide range of customers.

The company called the Dragonwing Q-8750 its “most advanced IoT processor to date,” designed for “high-performance edge computing and immersive experiences.” It features an AI engine capable of 77 trillion operations per second with support for 4-, 8- and 16-bit integer formats as well as the 16-bit floating point format. It can also support up to 12 physical cameras with 18 logical camera streams and three 48-megapixel image signal processors for use cases ranging from drones to multi-angle vision systems.

The Dragonwing Q-7790, on the other hand, is designed to bring a “new level of intelligence and responsiveness to consumer and industrial IoT devices.” The chip comes with a 24 TOPS AI engine and features display support and encoding for 4K video running at 60 frames per second as well as decoding for 4K video running at 120 frames per second. Security features include Total Management Engine, Secure Boot and Qualcomm Trusted Execution Environment.

Qualcomm said it is revealing these processors as part of a new portfolio of comprehensive solutions for “rapid prototyping, scalable deployment and superior AI integration at the edge.” These solutions include the Qualcomm Insight Platform for AI-powered video intelligence, Qualcomm Terrestrial Positioning Services for precise positioning capabilities and the Edge Impulse platform for running inference and training workloads.