Nvidia says its Tesla GPUs power a Chinese supercomputer that is expected to top the official list of the world's 500 fastest supercomputers, due to be published next week.
China's Tianhe-1A has taken the supercomputer crown from a system at Oak Ridge National Laboratory in Tennessee, which can achieve up to 2.3 petaflops, Nvidia said on Thursday.
Designed by China's National University of Defense Technology and located at the National Supercomputing Center in the city of Tianjin, Tianhe-1A reaches speeds of up to 2.507 petaflops and features 7,168 Tesla graphics chips from Nvidia and 14,336 central processing units designed by Intel.
"The performance and efficiency of Tianhe-1A was simply not possible without GPUs," Guangming Liu, chief of National Supercomputer Center in Tianjin said in a statement. "The scientific research that is now possible with a system of this scale is almost without limits; we could not be more pleased with the results."
China has invested billions of dollars to build powerful supercomputers, which are used to solve problems in defense, energy, finance, and science. For Nvidia, this area represents broader opportunities within the company's long-term strategy, driven by its CUDA parallel processing architecture and its latest generation of GPUs, codenamed "Fermi."
"If you look at the history of the company, Nvidia reinvented the graphics industry and coined this term GPU," said Sumit Gupta, product line manager for Nvidia's Tesla, in an interview. "This was the point when we decided to invest in graphics computing with this new CUDA architecture."
"The way to be successful in this business and successful in parallel processing is to get the hardware and software to talk to each other," Gupta said. "This is a very strategic decision for us: to become a computing company rather than simply a graphics company."
Tesla, the youngest of Nvidia's three product brands, first appeared three years ago. While GeForce is the consumer brand and Quadro is intended for the business space, Tesla brings parallel processing capabilities to servers and blades. Gupta said Tesla is known for its reliability, and for the flexibility it offers customers in the financial and oil and gas industries as well as scientific and government research, particularly through its error-checking memory capability.
With various industries and researchers "hungry for better computing capabilities," Gupta said Nvidia believes graphics technology as a whole has become more compute-intensive. As a result, Nvidia is focusing its energy on CUDA and the expansion of visualization technology, as the company indicated during its GPU Technology Conference in San Jose last month.
"We have a very aggressive roadmap," he said. "Every 18 to 24 months we come out with a new high-performance architecture."
Gupta said the Tianhe-1A occupies half the space and consumes one third of the power that a CPU-only system would have required to achieve the same level of performance. "Anyone can build a powerful system, but can you do it cost-effectively? That's the real benefit of GPU computing," Gupta said.