Nvidia's Top Brain Talks Future of Graphics Computing

Nvidia chief scientist David Kirk, delivering a chalk talk on "The Future of 3D Graphics" in San Francisco Friday, touted the advantages of GPU-based parallel computing for powering applications related to oil and gas exploration, computational finance and other computational modeling projects, as well as for faster, more powerful hybrid rendering within the graphics discipline itself. Because GPU computing is already a "data parallel" process, the work of breaking a computing problem apart into smaller sets of instructions to be carried out concurrently is more easily done on GPUs than on multi-core CPUs, Kirk said.
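
To make the idea of data parallelism concrete, here is a minimal, hypothetical CUDA kernel in which every GPU thread handles one array element independently; the kernel name and parameters are illustrative only, not anything Kirk showed.

    // Hypothetical example: scale every element of a vector by a constant.
    // The work decomposes into independent per-element operations, which is
    // what makes the problem "data parallel" and a natural fit for a GPU.
    __global__ void scale(float *data, float factor, int n)
    {
        // Each thread computes its own global index from block and thread IDs.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)               // guard threads that fall past the end
            data[i] *= factor;   // all elements are processed concurrently
    }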

Describing a kind of Moore's Law on steroids, he promised 100x performance gains in real-world applications just as soon as developers take advantage of Nvidia's general-purpose computing on GPUs (GPGPU) initiative. Some 50 million Nvidia GPUs capable of running the company's CUDA parallel programming environment have already shipped, he said.

"This is truly the democratization of supercomputing. We ship a million parallel units a week," Kirk said.

CUDA, or Compute Unified Device Architecture, is a C-based programming environment developed by Santa Clara, Calif.-based Nvidia that allows GPGPU programmers to code algorithms for execution on graphics processors. Currently, it's possible to run CUDA on Nvidia's GeForce desktop GPUs, as well as its Quadro workstation and Tesla high-performance computing products, and according to Kirk the graphics chipmaker recently released a CUDA SDK for the Macintosh operating system.
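
As a rough sketch of what coding against CUDA looks like in practice, the hypothetical program below adds two vectors on the GPU using the standard CUDA runtime API. It assumes a CUDA-capable GeForce, Quadro or Tesla part and the nvcc compiler; the names are illustrative, not drawn from Kirk's talk.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Kernel: one thread per output element.
    __global__ void add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                 // one million elements
        const size_t bytes = n * sizeof(float);

        // Host-side input and output buffers.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device-side (GPU) buffers.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);

        // Copy the inputs to the GPU, launch one thread per element, copy back.
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);          // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }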

In addition to his prediction about GPU-powered supercomputers, Kirk touched on the tremendous potential of work being done by companies like Evolved Machines, which builds simulated models of organic neural circuit growth using GPU acceleration.

"That means we're learning how to produce a computational model of the sense of smell or vision recognition," he said. Asked whether technology which promises self-wiring synthetic neural circuit arrays might presage the onset of "A.I. overlords," Kirk laughed but demurred from answering.

In a question-and-answer session following the talk, Kirk was asked about Nvidia rival Advanced Micro Devices' own GPGPU offerings from its ATI division, such as the Close To Metal open hardware interface and the FireStream stream computing processor.

"ATI let parallel computing sort of happen to them, whereas we actually went out and built a machine to do it," he said, tipping his own company as the prime mover in GPU computing.

If so, we'll know who to blame when self-aware robots start taking over the world.