Ian Buck On 5 Big Bets Nvidia Is Making In 2020

'We're reaching a point where supercomputers that are being built now are all going to be AI supercomputers,' Nvidia data center exec Ian Buck tells CRN of the chipmaker's AI ambitions.


AI Supercomputers

Much of Nvidia's AI work converges in high-performance computing, where the chipmaker hopes to turn HPC systems into "AI supercomputers."

"We're reaching a point where supercomputers that are being built now are all going to be AI supercomputers," Buck told CRN, pointing to the U.S. government taking a leadership role on the convergence of HPC and AI as an important sign of progress.

One of the core ingredients of these efforts is Nvidia's Tesla V100 data center GPU, which can deliver 100 teraFLOPS, or 100 trillion floating-point operations per second, of deep learning performance.
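To put a figure like that in context, here is a minimal sketch, assuming a CUDA-capable machine with PyTorch installed (none of this code comes from Nvidia or the article), that times the kind of half-precision matrix multiply the V100's Tensor Cores accelerate and reports the achieved throughput in teraFLOPS:

```python
# Rough throughput estimate: time a large FP16 matmul, the core
# Tensor Core workload, and convert to teraFLOPS. Illustrative only.
import torch

assert torch.cuda.is_available()
print(torch.cuda.get_device_name(0))

n = 8192  # matrix dimension; large enough to keep the GPU busy
a = torch.randn(n, n, device="cuda", dtype=torch.half)
b = torch.randn(n, n, device="cuda", dtype=torch.half)

# Warm up so one-time CUDA initialization doesn't skew the timing.
for _ in range(3):
    torch.matmul(a, b)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20
start.record()
for _ in range(iters):
    torch.matmul(a, b)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000 / iters  # elapsed_time is in ms
flops = 2 * n ** 3  # multiply-adds in an n-by-n matrix multiply
print(f"~{flops / seconds / 1e12:.1f} teraFLOPS achieved")
```

Measured numbers will land below the peak spec, since the rated figure assumes ideal Tensor Core utilization.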

Buck said 27,000 of Nvidia's V100 GPUs are inside Oak Ridge National Laboratory's Summit supercomputer in Tennessee, where scientists can use deep learning to identify extreme weather patterns from high-resolution climate simulations.

"That's only one example of many places where AI's going to accelerate scientific discovery, whatever that might be," he said. "So that's the exciting part: getting this technology in the hands of researchers who can turn around and do something that no one ever thought was possible."

Thanks to its high-performance AI efforts, Nvidia has become increasingly prominent on the Top 500 list of the world's most powerful supercomputers. For instance, the company's DGX SuperPOD, which consists of 96 DGX-2H systems housing 16 V100 GPUs each, for 1,536 GPUs in total, ranks at No. 22 on the Top 500, just above a system at the National Center for High-Performance Computing in Taiwan.

"This network actually can train the ResNet-50 in about 80 Seconds, which is ridiculously fast," Buck said in his GTC talk, referring to the well-known deep neural network.

 
 