10 Servers Using The New Nvidia A100 GPUs

These GPU servers offer a range of configuration options for organizations that want to take advantage of Nvidia’s new A100 GPU for training and inference acceleration.

New Servers Drive Acceleration For Training And Inference

It’s only been a few months since Nvidia launched its new A100 data center GPU capable of delivering accelerated performance for deep learning training and inference. But even in the early innings, the chipmaker said the GPU is already driving "meaningful" revenue, thanks to hyperscaler adoption.

The Santa Clara, Calif.-based company has made its A100 GPU available for the server market in two form factors: SXM, which requires Nvidia’s HGX A100 compute board, and PCIe, which makes the A100 available to a much broader range of servers.

[Related: Nvidia's Ian Buck: A100 GPU Will 'Future-Proof' Data Centers For AI]

The A100 is based on Nvidia’s 7-nanometer Ampere architecture, and the company has pitched it as a game-changing GPU that can deliver high, flexible performance for both scale-up and scale-out data centers, thanks in part to its multi-instance GPU feature, which can partition a single A100 into as many as seven independent GPU instances. The A100 also comes with 40 GB of HBM2 memory and delivers 1.6 TBps of memory bandwidth.
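These specifications are straightforward to verify on a deployed system. Below is a minimal sketch, not from the original article, that assumes a server with CUDA-enabled PyTorch installed and an A100 at device index 0, and reads back the GPU’s name, memory capacity and streaming multiprocessor count:

```python
# A minimal sketch (not from the article): read back an installed GPU's
# specs with PyTorch. Assumes PyTorch with CUDA support and that the
# A100 sits at device index 0, which may differ on multi-GPU servers.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device: {props.name}")                            # e.g. an A100 variant
    print(f"Memory: {props.total_memory / 1024**3:.1f} GB")   # ~40 GB on this A100
    print(f"SM count: {props.multi_processor_count}")         # 108 on the A100
else:
    print("No CUDA device detected")
```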

Nvidia’s A100 SXM GPU was custom-designed to support maximum scalability, with the ability to interconnect 16 A100 GPUs using Nvidia’s NVLink and NVSwitch interconnect technology, which provides nearly 10 times the bandwidth of PCIe 4.0.

The A100 PCIe GPU, on the other hand, offers lower performance and scalability, since NVLink can bridge only two of the GPUs, but its advantage is much wider server support, making it far easier to integrate into existing data center infrastructure.
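To make the scaling discussion concrete, here is a minimal sketch, not drawn from the article, of the kind of multi-GPU data-parallel training job that benefits from these interconnects. It uses PyTorch’s DistributedDataParallel with the NCCL backend, which carries gradient traffic over NVLink/NVSwitch when the hardware provides it; the model and tensor sizes are placeholders:

```python
# A minimal sketch (not from the article) of multi-GPU data-parallel
# training, the workload class that benefits from NVLink/NVSwitch.
# The model and tensor sizes are placeholders.
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL carries GPU-to-GPU traffic over NVLink/NVSwitch when present,
    # falling back to PCIe otherwise.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(64, 1024, device=local_rank)
        loss = ddp_model(x).sum()
        optimizer.zero_grad()
        loss.backward()  # gradient all-reduce crosses the GPU interconnect
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same script scales from two PCIe-linked GPUs to a full HGX A100 board simply by changing --nproc_per_node, since NCCL discovers the interconnect topology automatically.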

“A100 PCIe provides great performance for applications that scale to one or two GPUs at a time, including AI inference and some HPC applications,” Paresh Kharya, Nvidia’s senior director of product management for accelerated computing, said in June. “The A100 SXM configuration, on the other hand, provides the highest application performance with 400 watts of TDP. This configuration is ideal for customers with applications scaling to multiple GPUs in a server as well as across servers.”

When Nvidia unveiled the A100 PCIe GPU in June, the company said Dell Technologies, Cisco Systems and several other OEMs would release more than 50 A100-based servers this year.

What follows are 10 A100 servers that are available now or coming soon, from Nvidia’s DGX A100 AI system to Gigabyte’s G492-Z51, which can support up to 10 A100s.
