Nvidia CEO Touts Parallel Computing Growth, Offers Look At GPU Roadmap

One of the key trends for Nvidia right now is the growth of CUDA (Compute Unified Device Architecture), Nvidia's parallel computing platform and programming model for general-purpose GPU computing. In a keynote speech, Huang noted that nine OEMs are now using Nvidia's Tesla supercomputing GPUs, compared to just one last year.
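To give a sense of what CUDA programming actually looks like, here is a minimal, illustrative CUDA C sketch (not from the keynote) in which a GPU kernel adds two vectors in parallel; the function and variable names are hypothetical.

// Minimal illustrative CUDA C example: each GPU thread computes one
// element of the output vector. All names and sizes are hypothetical.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}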

Nvidia CEO Jen-Hsun Huang Keynotes At GTC

Huang attributed this and other developments to what he described as a revolutionary, world-changing moment for parallel computing.

"The entire area is looking for a breakthrough in computational capability," Huang said. "You can see research everywhere with CUDA all over the papers. We're making deep inroads into all areas of science and engineering."

CUDA has reached every geography in the world, Huang said, describing it as "the most open and available parallel computing infrastructure in history."

"Parallel computing in the form of CUDA is reaching the masses, the proliferation is continuing, the momentum is strong," Huang said.

Huang also outlined the next frontier Nvidia will cross with CUDA. He introduced CUDA x86, a compiler through which, he said, "you can now deploy applications literally on any computer and any server in the world." Nvidia's goal is to put CUDA in the hands of every developer, engineer, designer, and researcher in the world, Huang said.

Huang also described applications and contexts for Nvidia's grand ambition, starting with MATLAB, which he called one of the most popular applications in the world.

MATLAB, a numerical computing environment and programming language developed by MathWorks, will support CUDA-accelerated GPUs, making GPU computing instantly available to a million users doing important work around the world, Huang said.

Computational biology is another area in which CUDA is having an impact, according to Huang. Researchers need enormous computational resources to run simulations of molecular behavior or to compare gene sequences, as a demonstration during the keynote illustrated.

Huang also offered a glimpse of Nvidia's roadmap beyond Fermi. Its next-generation GPU architectures are code-named "Kepler" and "Maxwell," and will arrive in 2011 and 2013, respectively, he said. Maxwell will deliver about sixteen times the performance of Fermi, Huang said in the keynote.

"We are constantly learning about the barriers in achieving the speed of light in parallel computing," he said.

Nvidia also used the conference to announce that IBM, T-Platforms and Cray are all building high-end Tesla-based servers that use Nvidia GPUs to replace CPUs. Huang said the combined footprint of the high-performance computing OEMs now using CUDA covers about 85 percent of the world's top systems.