Nvidia CEO Huang Speaks at GTC

Huang described the "buzz" around GPUs and the momentum of CUDA, Nvidia's all-encompassing toolkit for developers and the foundation for many of the GPU computing solutions on display at GTC.
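For readers unfamiliar with the toolkit Huang refers to: CUDA lets developers write C-style "kernel" functions that execute across thousands of GPU threads in parallel. The following is an illustrative sketch of the programming model (a standard vector-addition example, not code shown at the keynote):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of the output array.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host arrays.
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device arrays: allocate and copy inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch: enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result back and print one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[0] = %.1f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The same source compiles with Nvidia's nvcc compiler for GPUs; the CUDA x86 compiler announced in the keynote extends that model to ordinary CPUs.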

Nvidia's current chip architecture, code-named Tesla, is being used by nine OEMs this year, up from just one last year, Huang said. He attributed this and other developments to a revolutionary, world-changing atmosphere for parallel computing.

"The entire area is looking for a breakthrough in computational capability," Huang said. "You can see research everywhere with CUDA all over the papers. We're making deep inroads into all areas of science and engineering," he said.

"We all know CUDA has reached all geographies in the world, from every single PC in the world," Huang said, adding, "we now know that if you develop applications for it, you can deploy it from every enterprise server in the world. It's the most open and available parallel computing infrastructure in history." He then set up the next frontier to cross, asking: where do we go from here?


Huang introduced CUDA x86, a compiler through which, he said, "you can now deploy applications literally on any computer and any server in the world."

"The goal is to put CUDA in the hands of every developer, every engineer, every designer, and every researcher in the world."

Huang then set about describing applications and contexts for his grand ambition, starting with Matlab, which he called one of the most popular applications in the world, with toolkits for everything, including parallel computing. Matlab will support CUDA-accelerated GPUs and see dramatic speed-ups, making GPU computing instantly available to a million users doing important work around the world, Huang said.

Huang related similar excitement about computational biology. Researchers need enormous computational resources to run simulations in order to understand molecular formation or compare gene sequences, a point illustrated by a demonstration during the keynote.

Huang then demonstrated the capabilities of a CUDA application called AMBER on the Kraken supercomputer, which he called the world's largest AMBER machine, synthesizing a composite image from eight Fermi GPUs.

He then introduced 3ds Max, Autodesk's modeling, animation and rendering software, calling its developer an important partner of Nvidia's. Together, Huang said, they have developed the capability to send a specific positioning to the cloud and then use 32 Fermi processors working on the same image to create a fully, physically realistic photosimulation in real time.

The other major announcement from Tuesday morning's keynote involved Nvidia's new systems for three OEMs, including the IBM BladeCenter, which Huang said would allow Nvidia to expand the reach and the "market footprint" of CUDA. Currently, the combined footprint of OEMs using CUDA equals about 85 percent of the world's top systems, he said.

"Parallel computing in the form of CUDA is reaching the masses, the proliferation is continuing, the momentum is strong," he said.

After demonstrations of Adobe imaging technology and graphics applications for heart surgery, Huang returned to the question of where GPU technology stands, and answered with a CUDA/GPU roadmap detailing the growth of performance per watt over time and the next few architecture code names after Fermi.

After Fermi and Tesla comes Maxwell, which he said would deliver about sixteen times the performance of Fermi in just a few more years. Along with the acceleration of processor power, Huang promised to offer preemption and virtual memory in future Nvidia chip designs.

"We are constantly learning about the barriers in achieving the speed of light in parallel computing," he said.