AMD Hires AWS Executive As Lead Engineer For ‘Helios’ AI Server Rack
Amazon Web Services infrastructure leader Arvind Balakumar says he was hired by AMD to lead ‘cluster-scale AI infrastructure solutions’ for the Helios server rack platform, which is the chip designer’s answer to Nvidia’s popular and powerful rack-scale AI solutions.
AMD has hired Amazon Web Services infrastructure leader Arvind Balakumar to head up engineering efforts for the “Helios” AI server rack platform, the chip designer’s answer to Nvidia’s popular and powerful rack-scale AI solutions.
In a Sunday LinkedIn post, Balakumar said he was hired by AMD in November as corporate vice president of engineering for AI infrastructure—a job that puts him in charge of “cluster-scale AI infrastructure solutions” for Helios, which is set to debut next year as the company’s first rack-scale platform for its Instinct GPUs.
[Related: Analysis: How Two Big Decisions Helped AMD Win The OpenAI Deal]
An AWS spokesperson confirmed to CRN that Balakumar left the company. An AMD spokesperson did not respond to a request for comment.
“The demand for AI infrastructure is insatiable, and meeting that demand requires breakthrough innovation across the entire compute stack from silicon architecture and high-performance interconnects all the way to power delivery and grid modernization,” Balakumar wrote on LinkedIn.
The executive said AMD “is uniquely positioned to lead this era,” thanks to its “broad compute portfolio” of EPYC CPUs, Instinct GPUs and Pensando DPUs that will be tightly integrated into the Helios platform and enabled by its ROCm software stack.
Helios Is Playing A Big Role In Boosting AMD’s Instinct Demand
AMD’s significant investments to take on Nvidia’s dominance in the AI infrastructure market have been leading up to the release of Helios, which AMD plans to position against Nvidia’s Vera Rubin platform for the most powerful AI data centers.
AMD CEO Lisa Su has indicated that Helios is playing a substantial role in drumming up interest in the company’s AI infrastructure offerings, saying last month that AMD expects to make tens of billions of dollars in annual revenue from its Instinct GPUs and related products in 2027.
This trajectory has given Su the confidence to say that she sees a “very clear path” to gaining double-digit share in the AI infrastructure market dominated by Nvidia, which has forecast that it will make $500 billion from its Blackwell and Rubin GPU platforms between this year and next.
Balakumar Most Recently Led AWS Global Infrastructure
Balakumar spent the last five-and-a-half years at AWS, where he was most recently general manager of infrastructure scalability. In this role, he “led the strategic vision and execution for AWS’ global infrastructure scalability across compute, networking and data centers, spanning 120 Availability Zones across 38 regions,” his LinkedIn profile said.
He also “oversaw the engineering, product management, and data analytics organizations driving infrastructure capacity planning, capacity delivery, GenAI infrastructure expansion, and cloud supply chain intelligence,” according to his LinkedIn profile.
Prior to this role, he was general manager of AWS infrastructure automation platforms, having earlier served as director of product for that area.
Before joining AWS in 2020, Balakumar spent nearly five years at Google Cloud, where he was most recently head of product and technology partnerships for compute and AI infrastructure. He previously worked for nine years at Intel, where he was a senior manager for engineering and product on its RealSense products and enterprise server platforms.