Celebrating The GPU: Seven Take-Aways From Nvidia's GTC 2012
4:00 PM EST Mon. May 21, 2012
Chip maker Nvidia has been pioneering the GPU market for years, and 2012 is shaping up to be no exception. Its GPU Technology Conference -- "GTC" for short -- took place last week in San Jose, Calif., and played host to a number of new chip-related announcements, ranging from moon landings to virtualization to gaming.
Here are the top seven, can't-miss take-aways for any GPU lovers who weren't able to be part of the action.
Nvidia kicked off GTC 2012 with the unveiling of its Tesla K10 GPU based on its next-gen Kepler architecture. The K10, according to Nvidia, is three times more energy-efficient and packs twice the processing punch compared to its Fermi-based predecessors.
Kepler also allows two Tesla K10 GPUs to be used on a single compute accelerator board. The result, Nvidia said, is aggregate performance of 4.58 teraflops and 320-GBps memory bandwidth, making the Tesla K10 accelerator platform the highest-throughput GPU accelerator on the market today.
The K10 news was coupled with that of the new Tesla K20 GPU, which won't officially launch until the fourth quarter.
Both GPUs are aimed at the high-performance computing markets, with the K10 specifically optimized for the oil and gas and defense industries.
If desktops can be virtualized, why can’t GPUs? According to Nvidia, they can.
The chip giant said that its new VGX platform enables IT teams to deploy virtualized desktops with compute and graphics performance identical to that of an actual GPU-accelerated PC or workstation. VGX even enables the virtualization of compute-intensive 3-D design and simulation applications on smaller devices such as tablets, allowing users to view them on the go rather than stay glued to a clunky workstation.
IT also can use the VGX platform on corporate networks to deliver virtual desktops to employee-owned devices as the BYOD trend heats up.
Nvidia's CUDA toolkit is a resource for C and C++ developers creating GPU-accelerated apps, arming them with a number of GPU-specific tools, including compilers, programming guides and debugging methods.
And, according to Nvidia, response to the kit has been off the charts. Sumit Gupta, manager of the Tesla line at Nvidia, told CRN that there is one CUDA toolkit download every 60 seconds. While the kit is free, this is still good news for Nvidia; it means demand for GPU-accelerated apps is on the rise.
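To give a sense of what those downloads are for: the CUDA toolkit's core workflow is writing a C/C++ function (a "kernel") that runs across thousands of GPU threads, then compiling it with the toolkit's nvcc compiler. Below is a minimal, hedged sketch of that workflow, a vector addition, using only standard CUDA runtime calls; it is an illustration, not one of Nvidia's bundled samples, and it requires a CUDA-capable GPU to run.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // 1M elements
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device-side buffers, plus host-to-device copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", h_c[0]);     // 1.0 + 2.0 = 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Built with the toolkit's compiler as `nvcc vecadd.cu -o vecadd`, this is the pattern the toolkit's compilers, guides and debuggers are all organized around.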
According to stats from the chip maker, more than 50 percent of today's top supercomputing apps use GPUs, compared to only 15 percent a year ago.
To Nvidia, gaming is serious business, which means it's just as deserving of the latest-and-greatest technologies as the rest of the PC industry.
That's why it used its GTC 2012 event to unleash its new GeForce Grid technology, which lets gamers stream games from the cloud onto a series of displays, such as notebooks, TVs, tablets and smartphones.
According to Nvidia, many early iterations of cloud gaming platforms perform slowly or, worse yet, sacrifice the quality of the graphics. But with GeForce Grid, Nvidia said it has made game streaming "as common as renting a movie online," with features such as low server latency and reduced power consumption.
And, the chip maker said, graphics quality of course will be top-notch with the inclusion of Nvidia's next-generation GeForce GPUs.
Giving a shout-out to its developer community, Nvidia unveiled Nsight, Eclipse Edition, an integrated development environment (IDE) for building GPU-accelerated apps for Linux and Mac operating systems.
Part of Nvidia's next-gen CUDA 5 software toolkit, Nsight grants developers access to a slew of debugging and profiling tools to facilitate the app-building process. Auto-completion features and integrated code samples, for example, help guide developers through their programming.
"Nvidia Nsight is the ultimate development platform for heterogeneous computing," said Ian Buck, general manager of GPU computing software at Nvidia, in a statement. "Whether you're a graphics or HPC developer, Nsight makes it easy to develop parallel code for GPUs and CPUs using your preferred IDE."
The term "GPU" tends to call to mind video games and supercomputing. But these little chips can be used for much more than that.
Nvidia's Gupta told CRN that the GPU is adopted more broadly today than ever in markets ranging from government and defense, to manufacturing, to life sciences. They're being leveraged in totally new ways to aid space exploration and crime fighting, too. "GPUs are being used for planning a mission to the moon, GPUs are being used to accelerate or enable more accurate fingerprint matching, and GPUs are being used for radio astronomy to essentially study Einstein's law of relativity," Gupta said.
As the GPU evolves and takes on more of the responsibilities from the traditional CPU, new use cases will emerge. Gupta explained that, in the past, GPUs had to wait for the CPU to "tell them what to do." Now, with new features like dynamic parallelism, the GPU can adapt to data directly and more independently from the CPU.
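The dynamic parallelism Gupta refers to, introduced with the Kepler architecture and CUDA 5, lets a kernel running on the GPU launch further kernels itself, rather than returning to the CPU to be "told what to do." Here is a hedged sketch of the idea; the work the kernels do (doubling segments of an array) is a hypothetical stand-in for a real adaptive workload, and device-side launches require a GK110-class Kepler GPU (compute capability 3.5+).

```cuda
#include <cuda_runtime.h>

// Child kernel: processes one segment of the data.
__global__ void childKernel(float *data, int offset, int len) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + len) data[i] *= 2.0f;
}

// Parent kernel: with dynamic parallelism, GPU code can inspect its
// data and launch child kernels directly, with no CPU round-trip.
__global__ void parentKernel(float *data, int n, int segments) {
    int s = threadIdx.x;
    if (s < segments) {
        int len = n / segments;
        int offset = s * len;
        // Device-side kernel launch (legal on compute capability 3.5+).
        childKernel<<<(len + 255) / 256, 256>>>(data, offset, len);
    }
}
```

Code using device-side launches is compiled with relocatable device code enabled, e.g. `nvcc -arch=sm_35 -rdc=true`. Before Kepler, only the CPU could issue kernel launches, which is exactly the dependency Gupta describes going away.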
Nvidia wasn't alone this week in its praise of the almighty GPU. Other organizations, including Dell and the Lego Group -- maker of the Lego building blocks that have been a childhood staple for decades -- shared stories of GPU success.
Dell unveiled what it claims to be the world's first multiuser 2U rack workstation for virtualized 3-D workloads, called the Precision R5500. The new workstation is targeted at users running 3-D workloads in fields including engineering, medicine, media and entertainment, and finance. It runs on Nvidia's Quadro and Tesla GPUs.
The Lego Group used a GTC breakout session to describe how it leveraged the CUDA platform to free up compute resources and cut down on costs during its manufacturing process.