Surprise, Surprise: Nvidia Pushes GPU Computing

Nvidia had its big boy pants on Tuesday, dedicating an entire on-campus Technology Editor's Day to parallel programming on graphics processors, to new hardware optimized for the task, and to the growing number of software developers downloading the graphics chip maker's CUDA developer kit to leverage GPU multithreading for applications in fields such as finance, medical imaging and particle accelerator modeling.

For a company generally associated with fragging pixelated super soldiers, Santa Clara, Calif.-based Nvidia wanted to talk about anything but games at the event, instead trotting out a collection of parallel computing partners as far removed from gaming enthusiasts as possible while still showcasing real-world uses of graphics-optimized silicon.

Which isn't to say that Nvidia's reputation with the gaming set doesn't have its benefits. Amitabh Varshney, a professor of computer science at the University of Maryland, was on hand Tuesday to testify that the chip maker's hipness quotient is a great recruiting tool for attracting young programmers to parallel computing.

"There's a coolness factor around GPUs," Varshney said during his presentation at Tuesday's event. "We see students get into computer programming because they want to develop games. And then CUDA draws them in to other sorts of programming for other applications. It's sort of like a bait-and-switch."

Representatives of Nvidia partners, such as Peter Messmer, VP of Tech-X's Space Applications Group, described how parallel computing and the CUDA (Compute Unified Device Architecture) programming language have helped accelerate data analysis for particle accelerator modeling applications. It turns out that graphics chips, designed to run large numbers of simple threads in parallel, are much better than CPUs at certain kinds of computation.

That's not exactly news, of course. But according to Nvidia's Andy Keane, GM of the GPU Computing business unit, and CUDA developer Ian Buck, what is news is that a great many more types of computing than just graphics can be performed more efficiently on GPUs.
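
As a rough illustration of the programming model the speakers were describing, consider a minimal CUDA C sketch, not drawn from Nvidia's or any partner's code, in which one short C function (the "kernel") is executed by thousands of GPU threads at once, each handling a single element of an array; the function and variable names here are made up for the example.

    /* Minimal illustrative CUDA C sketch -- not from Nvidia's or any
       partner's codebase. One short kernel runs across thousands of GPU
       threads, each scaling a single array element. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n)
    {
        /* Each thread computes its own global index and touches one element. */
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;                      /* one million elements */
        size_t bytes = n * sizeof(float);

        float *host = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i)
            host[i] = 1.0f;

        float *dev;
        cudaMalloc((void **)&dev, bytes);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        int blocks = (n + 255) / 256;
        scale<<<blocks, 256>>>(dev, 2.0f, n);

        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
        printf("host[0] = %f\n", host[0]);          /* prints 2.000000 */

        cudaFree(dev);
        free(host);
        return 0;
    }

A CPU would step through those million elements a handful at a time; the GPU schedules the threads in parallel across its many cores, which is why workloads with this shape see outsized gains.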

As a company that makes only graphics processors and chipsets, Nvidia obviously has an interest in playing up the uses of the GPU. To the chip maker's credit, though, it has invested a good deal of time and money in developing its CUDA language and developer kit, as well as in partnering with the likes of Tech-X to expand its reach.

CUDA now ships in every graphics driver Nvidia offers, and free downloads of the CUDA developer kit have ramped up steadily each month since Nvidia released the initial beta last year. The chip maker is also developing a Fortran compiler for GPU computing, according to Buck, who added that numerics libraries and a C compiler are available now.

"The mantra with CUDA and its design was to start with C. We are working with Fortran as well because the HPC space is heavily invested in Fortran," Buck said. "C++ is something a lot of ISVs are interested in. CUDA is primarily C today, but we've pulled in some C++ features and that is a priority for us."

Nvidia's investment in parallel computing has already paid considerable dividends for TechniScan Medical Systems, said Jim Hardwick, senior software engineer at the Salt Lake City-based medical imaging developer.

TechniScan had a problem with its scanning technology, which renders 3-D images of the breast for diagnosing breast cancer and other malignancies. The company had already worked out how to scan the breast and produce an image for clinicians and radiologists to analyze, Hardwick said, but from start to finish the process took 2 1/2 hours.

That was too long for TechniScan's customers, who wanted to be able to discuss the imaging results with patients in a single visit.

Long story short, Hardwick, a self-described "casual gamer," learned about developments in GPU computing and CUDA through his interest in Nvidia's gaming platforms. He convinced his boss to build a $600 GPU-based system to run a big chunk of the company's proprietary algorithm and test it against TechniScan's existing Intel Pentium-based cluster.

"We had to find the biggest chunk of the algorithm that took most time. It was the part that has to simulate an ultrasound propagating through the breast tissue. We increased efficiency by 16 times on this chunk of the algorithm. That was enough for us to take it out of the back room and port the rest of the algorithm onto the GPU setup," Hardwick said.

TechniScan eventually achieved speedups ranging from 16x to 320x on portions of its algorithm. The end result was that the company got its breast imaging process down to 16 minutes from start to finish, roughly a ninefold improvement overall and enough to go to market with a product that met its customers' single-visit requirement.
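
For a sense of how speedups on pieces of an algorithm translate into end-to-end gains, the small, purely illustrative C calculation below applies Amdahl's-law reasoning; the 90/10 split between GPU-friendly and CPU-only work is an assumption made up for this example, not a TechniScan figure. Even a 320x kernel is capped by whatever fraction of the run stays on the CPU, which is why a 2 1/2-hour process landing at 16 minutes (about 9x overall) is consistent with 16x-to-320x gains on individual portions.

    /* Illustrative Amdahl's-law arithmetic -- not TechniScan's code.
       The 90/10 split below is an assumption chosen for the example. */
    #include <stdio.h>

    /* Overall speedup when a fraction f of the runtime is accelerated by s. */
    static double amdahl(double f, double s)
    {
        return 1.0 / ((1.0 - f) + f / s);
    }

    int main(void)
    {
        /* Suppose 90 percent of the original 150-minute run is GPU-friendly. */
        double at16x  = amdahl(0.90, 16.0);
        double at320x = amdahl(0.90, 320.0);

        printf("16x on 90%% of the work:  %.1fx overall\n", at16x);   /* ~6.4x */
        printf("320x on 90%% of the work: %.1fx overall\n", at320x);  /* ~9.7x */
        printf("150 minutes becomes about %.0f minutes\n", 150.0 / at320x);
        return 0;
    }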

What's next for Nvidia and its GPU computing efforts? Expect more optimized hardware solutions, said Keane, who hinted at specific product releases in the coming weeks. Will all of this ever trickle down to more mainstream systems and applications, or is it limited to the HPC and supercomputing arenas?

Maybe that's the wrong question, said Jack Collins, manager of scientific computing at the National Cancer Institute's Advanced Biomedical Computing Center in Frederick, Md. Maybe there's a paradigm shift ready to happen that involves more people gaining access to HPC development processes.

"The reason we have all this Linux and other open-source programming is because all these people had access to PCs. What we need now is for more people to be exposed to CUDA and HPC. The real development happens when people hear about CUDA and go home and mess around with it, experiment with it on their own GPUs," Collins said.

"Everybody has a GPU, so why not?"