Nvidia, IBM Team Up On GPU Server


The forthcoming IBM iDataPlex dx360 M3 systems will be equipped with a pair of Nvidia’s Tesla M2050 GPUs to go along with a dual-CPU configuration, according to Sumit Gupta, senior manager of Nvidia’s Tesla GPU Computing HPC business unit.

“This is the first time that GPUs are part of a mainstream, high-volume product line from a Tier 1 OEM,” said Gupta, who is responsible for business development around Santa Clara, Calif.-based Nvidia’s CUDA programming language for GPUs and CUDA-based GPU computing products.

“We’ve been talking about momentum for a long time with CUDA. A number of textbooks have now been written about how to program a GPU, whether it’s CUDA or OpenCL,” Gupta said. “This is a pretty good achievement, considering the programming language is only three years old.”

The iDataPlex dx360 M3 servers from Armonk, N.Y.-based IBM are intended for high-performance computing installations and deliver a number of performance gains over CPU-based iDataPlex systems, according to Nvidia. The integration of the Tesla parts delivers a 10x increase in performance per node and 65 percent lower acquisition costs compared with previous-generation iDataPlex servers, the graphics chip maker said.


“Supercomputing is changing -- from CPU-based clusters to massively parallel GPU-based clusters -- and this change is greatly accelerated by IBM’s adoption of GPUs in its new iDataPlex servers,” said Andy Keane, general manager of the Tesla business at Nvidia.

Gupta said Nvidia anticipated that the world’s top supercomputers would all use GPUs in the coming months. He said GPU computing and the CUDA programming language were already valuable computational tools in specific areas like oil and gas exploration, financial modeling and supercomputing at universities, but that was only the beginning.

“We’re kind of a startup here. We’re building a business around some key markets. The medical imaging market is interesting, for example, because they have really interesting problems. The CT scanners need to reduce the dosage they’re giving patients, which means they need to use more computational time for rendering CT scans,” he said.

Tackling such problems by sharply slashing rendering times through parallel programming on GPUs was a clear path forward, Gupta said.

Universities are also seeing the benefits of GPU computing, he added. More than 335 universities around the world now teach CUDA GPU programming, and Nvidia’s CUDA developer toolkit has been downloaded more than 200,000 times, according to the company.