Nvidia’s A100 is the chipmaker’s flagship data center GPU for AI training and inference, and although it was first introduced last year, it continues to dominate multiple AI performance benchmarks. Nvidia recently announced that the A100 broke 16 AI performance records in the latest MLPerf benchmarks, which the company says makes it the fastest GPU for training among commercially available products. The A100 now ships in 40GB and 80GB memory configurations across PCIe and SXM form factors, and Nvidia says it can outperform its previous-generation V100 and T4 GPUs several times over. Its headline features include Multi-Instance GPU (MIG), structural sparsity, and support for the new TF32 numeric format.
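For readers unfamiliar with TF32: it keeps float32’s 8-bit exponent (so the same dynamic range) but only 10 mantissa bits instead of 23, which is what lets the A100’s Tensor Cores run matmuls much faster. A minimal Python sketch of what that reduced precision means numerically, assuming simple truncation of the low mantissa bits (real hardware rounds):

```python
import struct

def to_tf32(x: float) -> float:
    """Truncate a float32 value to TF32 precision.

    TF32 keeps float32's 8 exponent bits but only the top 10 of its
    23 mantissa bits, so we zero the low 13 mantissa bits.
    (Illustrative truncation only; the hardware performs rounding.)
    """
    # Reinterpret the float32 bit pattern as an unsigned 32-bit int
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # clear the 13 low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# 1 + 2**-10 fits in TF32's 10 mantissa bits; 1 + 2**-11 does not.
print(to_tf32(1.0 + 2**-10) == 1.0 + 2**-10)  # True
print(to_tf32(1.0 + 2**-11))                  # 1.0 (precision lost)
```

The key design point is that TF32 trades mantissa precision for speed while keeping float32’s exponent range, so most training code can use it without the loss-scaling tricks that float16 requires.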