AMD Turns To Ex-Nvidia Star To Jump-start Fusion

Hegde, who helped integrate PhysX hardware acceleration into Nvidia’s CUDA framework after Nvidia’s acquisition of his company, Ageia, was named head of AMD’s Fusion Experience Program on Thursday. In his new role, he said, he will be charged with building the developer ecosystem for what Sunnyvale, Calif.-based AMD calls Accelerated Processing Units, or APUs, as well as spreading the Fusion message to consumers, analysts and media.

“It’s a complex set of things that need to be executed,” Hegde told CRN on Thursday. “We’re making Fusion the center of our road map. When you’re changing the entire ecosystem, changing how developers program a processor, you don’t do that for just one processor. You do it for a road map. Developers want to develop for an entire road map that’s forward-compatible.”

AMD is expected to release its first APU, code-named Llano, in the first half of 2011. That processor and future chips on the Fusion product road map will incorporate both a central processor and a graphics processor on a single silicon die, Hegde said.

“Think of the advantages of that in terms of the buses, the power management capabilities, the tremendous bandwidth, the tremendous performance-per-watt advantage, which is something AMD has always done well and which we will continue to exploit,” he said.


Hegde said AMD, which acquired graphics chip maker ATI Technologies in 2006, was in a unique position to bring so-called heterogeneous computing, which takes equal advantage of the strengths of both the CPU and the GPU, into the mainstream.

“I think there’s a reason why AMD has chosen to go Fusion. Look at the company and its strengths. We can leverage a real competitive advantage,” he said.

AMD would be the company to take heterogeneous computing out of the high-performance computing (HPC) world and into the consumer space, Hegde said. But even though mainstreaming Fusion was AMD’s top priority, he added that AMD’s development of its APUs would not preclude the continued advancement of its discrete graphics product lines -- all the way up to the company’s FireStream cards, which compete at the high end of the GPU market with Nvidia’s Quadro and Tesla products.

“AMD was for many years a top vendor of CPUs and after the merger with ATI, which was one of the world’s top vendors of high-performance GPUs, you now have tremendous capability in both these spaces. And it’s not the capability of a single part or product, but the capability of the experience in both areas that we have in our team,” Hegde said.

The poaching of Hegde from Santa Clara, Calif.-based Nvidia signals that AMD is gearing up to move aggressively on the GPU and heterogeneous computing fronts after years on the sidelines, said Jon Peddie, principal analyst at Jon Peddie Research.

ATI Technologies was an important first mover in GPU computing, beginning in the late 1990s with a promising parallel programming initiative at Stanford University, but that effort tailed off after ATI’s acquisition by AMD in 2006, Peddie said.

Nvidia stepped in, investing millions of dollars in developing its proprietary CUDA architecture and programming language while also evangelizing GPU computing and winning over HPC system builders with its Tesla Preferred Program and other loyalty-building initiatives in the custom systems channel.

“ATI, or I should say AMD, took their eye off the ball of GPU computing. But Nvidia didn’t. And when Nvidia commits to something, they put their all into it,” Peddie said. “So today, CUDA is by far the most widely used and robust programming platform for parallel computing that exists in the world.

“So now AMD has its house in order. They’ve completed their smart fab strategy, the Intel lawsuit is behind them and now they’re ready to get really rolling on GPU computing.”

A major difference between the approaches by Nvidia and AMD to GPU computing is that the former has developed its proprietary CUDA framework, while the latter says it’s committed only to open standards like the OpenCL heterogeneous programming language that can work on any vendor’s hardware. Nvidia GPUs also support OpenCL, but CUDA programs will only run on Nvidia hardware.
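For illustration only, a minimal sketch of what that portability looks like in practice: the same generic OpenCL host calls enumerate whatever platforms and devices a machine exposes, whether the runtime underneath comes from AMD, Nvidia or another vendor. This is a standard OpenCL 1.x example, not AMD code.

```c
/* Minimal sketch: enumerate OpenCL platforms and their devices.
   Assumes a standard OpenCL SDK is installed (link with -lOpenCL). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* The same call works against AMD, Nvidia or Intel runtimes. */
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        /* Ask for every device type: CPUs and GPUs alike can run the
           same OpenCL kernels, which is the heterogeneous-computing pitch. */
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                       8, devices, &num_devices);

        printf("Platform %u: %s (%u devices)\n", i, name, num_devices);
    }
    return 0;
}
```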

“Our strategy is to embrace open standards all the way through. We’re comfortable that we’ll have to win with our hardware. But even with that philosophy, you have to deal with the reality that GPU programming is still relatively new,” Hegde said.

“If you look at a GPU and a CPU, and you want heterogeneous computing, quite frankly, it’s not as easy to program a GPU as a CPU, though much progress has been made. So we’ve got some plans in place to simplify that process and kind of hide the complexity from the developer.”
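As a rough illustration of the mental shift Hegde is describing, here is a tiny, generic OpenCL C kernel -- a textbook example, not AMD tooling. Instead of writing a familiar CPU-style loop over an array, the developer writes only the loop body, and the runtime launches one work-item per element.

```c
/* Illustrative OpenCL C kernel (generic example, not AMD code):
   the CPU-style "for (i = 0; i < n; i++)" loop disappears and each
   work-item computes a single element of the result. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    size_t i = get_global_id(0);   /* index of this work-item */
    c[i] = a[i] + b[i];            /* one element per work-item */
}
```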

Hiring Hegde was a good strategic move by AMD, said one West Coast-based custom system builder who partners closely with Nvidia on HPC accounts. But the system builder, who asked not to be named, also called Hegde’s public role with AMD “curious.”

“If they bring him on board, with his experience on the CUDA team, it just seems more natural that it would be to kick-start their Stream initiative, not the Fusion project in 2013 or whatever,” the system builder said, referring to AMD’s seemingly muted effort in recent years to compete with Nvidia in the HPC space with its FireStream products.

“To be frank, the problem with AMD is that the future is always 12 months away with them. AMD has the hardware, but the Stream team is so small and they have limited resources, so unless the push comes from the top that we’re going to invest and do this at the level that Nvidia has with CUDA, it’s not going to happen.”

But Hegde was adamant that GPU computing, and by extension heterogeneous computing, needed to move beyond what he called “an island” in academia and the HPC market.

“The APU essentially delivers on the promise of GPU computing, the promise that’s been around three, four, five years. Some segments of the market have already bought into it, like some portions of HPC,” he said.

“What I think the APU does is take that promise and deliver it to the mainstream. It addresses performance-per-watt, which is a huge consideration in the consumer space. Consumer applications typically have smaller, more granular workloads, so movement between the CPU and GPU is crucial. It’s very tough to do this with discrete parts on a platform, but it’s possible with them being on the same die.”

Peddie said that with CUDA, Nvidia’s GPU computing initiative is “far out ahead of everybody else in terms of mindshare and developer toolkits and everything else.” But the analyst said AMD is fully committed to changing the game entirely with Fusion.

“The AMD people I spoke to recently said this is bigger than Hammer,” he said, referring to the code name for AMD’s groundbreaking move to a 64-bit processor architecture in 2003.

“They say that Fusion is going to be an even bigger push than that. That’s saying a lot. And Manju is the perfect guy for this.”