AMD hosted a roundtable Tuesday with a number of industry experts to discuss recent developments and future directions for high-performance computing (HPC).
Panelists included Chuck Moore, corporate fellow and chief technology officer at AMD Technology Group; Mike Vildibill, vice president of worldwide sales at Appro; Margaret Williams, senior vice president of HPC Systems at Cray; Charles Wuischpard, president and CEO of Penguin Computing; Bill Mannel, vice president of product marketing at SGI; and Don Clegg, vice president of marketing and business development at Supermicro.
While the panel discussed a range of topics, including cloud computing and on-demand HPC, front and center was the role of the GPU within heterogeneous computing, a processing model that combines CPUs and GPUs and is said to be integral to the future of supercomputing.
Most panelists affirmed the staying power of heterogeneous computing, along with the importance of the GPU in achieving this longevity. AMD has bet heavily on heterogeneous computing with its Fusion accelerated processing unit (APU) family of integrated chips, which was introduced earlier this year.
"We do see heterogeneous computing as here to stay," said Cray’s Margaret Williams, adding that GPUs are becoming increasingly crucial to this model.
AMD’s Chuck Moore agreed that GPUs are a fundamental piece of heterogeneous computing, but noted that there is still a long way to go before the model is perfected. "GPU computing is still in its infancy," Moore said. "It’s nowhere near matured."
AMD said it is putting significant focus on GPU development and is aiming to make its GPUs more vector-oriented to simplify programming. Questions about the future of standalone GPUs have lingered this year with the growing popularity of integrated chips.
However, next-generation GPUs have been developed to take on more tasks traditionally handled by the CPU, thereby taking pressure off the CPU and allowing a system to run faster. For example, Nvidia's Tesla GPUs were recently used to power the world's fastest supercomputer, dubbed Titan, at Oak Ridge National Laboratory.
In addition to GPUs, exascale computing – an attempt to push computing capabilities beyond the existing petascale – was another hot topic among panelists.
Panelists expressed confidence that exascale computing is attainable, but agreed it most likely won’t become a reality for years.
"[Exascale computing] probably won’t be until 2019 or 2020," Moore projected, citing power consumption as one of the main bottlenecks.
Moore noted that customers considering exascale computing need to plan on one million dollars per megawatt used. What’s more, exascale computing would have "an overall effect" on society, Moore explained, requiring the addition of power plants to a "distribution grid that’s already pretty saturated."
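Moore's rule of thumb can be sketched as a back-of-the-envelope calculation. Note the article does not state a target machine size, so the 20 MW figure below is purely an illustrative assumption:

```python
# Sketch of Moore's power-cost rule of thumb from the panel:
# roughly $1 million per megawatt of power used.
COST_PER_MEGAWATT = 1_000_000  # dollars, per Moore's estimate

def power_cost(megawatts):
    """Estimated power budget in dollars for a system drawing `megawatts`."""
    return megawatts * COST_PER_MEGAWATT

# A hypothetical 20 MW exascale system (assumed size, not from the article):
print(power_cost(20))  # 20000000, i.e. $20 million
```

Even under this optimistic assumption, the power bill alone runs into the tens of millions of dollars, which is why panelists flagged power consumption as a primary bottleneck.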
Though exascale computing remains a work in progress, Penguin and Cray, among other industry leaders in the HPC space, have already deployed platforms at various research facilities to enable further analysis of the trends.
Penguin, for instance, just announced Tuesday its installation of what’s claimed to be the world’s first HPC cluster powered by AMD APUs at Sandia National Labs in Albuquerque, New Mexico. The HPC system at Sandia consists of 104 servers and is said to deliver a potential peak performance of 59.6 teraflops.
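The Sandia figures imply a rough per-server number, which can be checked with a quick calculation (the per-server value below is derived, not stated in the announcement):

```python
# Per-server peak performance implied by the Sandia cluster figures:
# 59.6 teraflops of potential peak performance across 104 servers.
peak_tflops = 59.6
servers = 104

per_server_tflops = peak_tflops / servers
print(round(per_server_tflops, 3))  # 0.573, i.e. roughly 573 gigaflops per server
```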
The panel, overall, maintained an optimistic, forward-looking tone. "The power of computing as it intersects the next generation of sciences will bring about incredible things," Moore said.