Analysis: How Two Big Decisions Helped AMD Win The OpenAI Deal
While AMD’s decisions to accelerate its GPU road map and acquire ZT Systems played critical roles in helping it win the big OpenAI deal, AMD CEO Lisa Su indicated that these moves have also accelerated its broader efforts to challenge Nvidia’s AI dominance.
When AMD announced its blockbuster deal with OpenAI Monday morning, the agreement validated two critical decisions the chip designer made over the past two years: accelerating its GPU road map and acquiring ZT Systems.
With Nvidia’s continued dominance of the AI infrastructure market allowing the rival to generate several times AMD’s revenue, time has not been on the company’s side, forcing it to make bold moves in a bid to mount a serious challenge to Nvidia.
[Related: Analysis: After Big Nvidia Win, Will Intel Ever Escape Its Rival’s Shadow?]
One of those decisions revolved around the pace at which AMD introduces new data center GPUs. While the company had previously introduced a new Instinct GPU roughly every two years, it announced in June of last year that it was moving to a one-year release cadence.
At the time, Forrest Norrod, AMD’s top data center executive, told CRN that it had “no choice” but to “dramatically” increase its investments in AI and release GPUs at a faster cadence in the face of unrelenting generative AI innovation and Nvidia’s aggressive strategy.
He said the move was partly in response to the announcement in late 2023 that Nvidia would release new data center GPUs every year instead of every two years. Norrod believed the rival’s move was triggered by the growing viability of AMD’s Instinct GPUs.
“Nvidia, quite candidly, stepped on their accelerator pedal, and when they saw that— ‘holy crap. AMD has got a real part; they’re going to be a real competitor’—they very deliberately stepped on the accelerator trying to block us and everybody else out. And so we’re responding to that as well,” he said.
This accelerated road map resulted in AMD releasing the Instinct MI325X in last year’s fourth quarter, followed by the MI350X series this past summer.
The company is now building up to its biggest moment yet in the AI computing space as a result of this sped-up cadence: next year’s launch of the MI400 series.
At its Advancing AI event in June, AMD CEO Lisa Su said the MI400 series was “built from the ground up for leadership” in both large-scale training and distributed inference, and she revealed that OpenAI is a “very early design partner” for the GPU platform.
To underline OpenAI’s interest, Su brought out on stage the AI software giant’s CEO, Sam Altman, who said he is “extremely excited for the MI450,” referring to the series’ flagship GPU.
“When you first started telling me what you’re thinking about for the specs, I was like, there’s no way. That just sounds totally crazy. It’s too big. But it’s really been so exciting to see you all get close to delivery on this. I think it’s going to be an amazing thing,” he added.
Now AMD is planning to deploy one gigawatt of MI450 infrastructure for OpenAI beginning in the second half of 2026 as part of a project totaling six gigawatts. Su said the deal, worth tens of billions of dollars, will allow AMD to achieve its data center AI revenue goal by 2027 and could prompt other substantial Instinct deals.
What makes the MI450 special is that it’s the first Instinct GPU AMD is using to build rack-scale server platforms, which enable high-speed connections among dozens of GPUs so that an entire server rack acts as a single supercomputer for the most computationally demanding AI workloads.
AMD revealed its first rack-scale platform at the Advancing AI event in June, more than a year after Nvidia introduced its first product in the category, the Blackwell-based GB200 NVL72. These platforms now serve as the flagship vehicle for Nvidia’s most powerful GPUs, enabling the fastest possible AI performance within a rack.
For AMD’s OpenAI project, these rack-scale platforms will play a critical role in delivering the fastest AI computing the company can provide.
The chip designer likely wouldn’t have been able to deliver these rack-scale platforms to OpenAI starting in late 2026 had it not acquired server designer and manufacturer ZT Systems last year.
When AMD announced in August of last year that it had reached a deal to buy ZT Systems for $4.9 billion, Su explicitly stated that the acquisition would give it “world-class systems design and rack-scale solutions expertise” to “significantly strengthen our data center AI systems and customer enablement capabilities.”
(The company’s plans for ZT Systems didn’t include its server manufacturing unit, which AMD agreed this past May to sell to U.S. electronics manufacturing services giant Sanmina for $3 billion.)
While the ZT Systems acquisition didn’t close until March, Norrod reportedly said roughly three months earlier that AMD was already working closely with ZT Systems’ engineering team on “forward-looking products” based on upcoming Instinct GPUs. These included MI400 series products, to which ZT Systems was set to make a “major contribution.”
At the Advancing AI event in June, Norrod emphasized in a roundtable with CRN and other news outlets that AMD is using the ZT Systems acquisition to design rack-scale solutions “specifically for the very lead customers [who are going to] deploy a crazy high volume.” While Norrod didn’t name any of those lead customers at the time, it should be clear now that OpenAI is one of them with the deal AMD announced this week.
While AMD’s decisions to accelerate its GPU road map and acquire ZT Systems played critical roles in helping it win the big OpenAI deal, comments by Su in a Monday webcast indicated that these moves have also accelerated its broader efforts to challenge Nvidia’s dominance of the AI infrastructure market.
In addition to the tens of billions of dollars AMD expects to make from the OpenAI deal in the coming years, Su predicted that the big customer win will have a “compounding effect” that could result in the company making “well over $100 billion in revenue over the next few years” from other customers deploying Instinct infrastructure.
With Su expecting this growing customer interest to put AMD on a “clear trajectory to capture a significant share of the global AI infrastructure build out,” the CEO has shown how big, bold decisions can change the competitive dynamics of the semiconductor industry in a relatively short amount of time.