CRN’s 2020 Products Of The Year

Kyle Alspach

CRN editors compiled the top partner-friendly products and services that launched over the past year, then turned to solution providers to choose the winners.

PROCESSORS

INTEL XEON SCALABLE, 3RD GEN

WINNER: OVERALL

With Intel’s first batch of third-generation Xeon Scalable processors, the chipmaker is targeting enhanced performance, including for data-intensive AI workloads, in servers with four and eight sockets. The new processors bring performance gains over Intel’s second-generation Xeon Scalable lineup and introduce an additional instruction set for built-in AI acceleration, which Intel says can boost training performance 1.93 times and inference performance 1.9 times compared with standard single-precision floating point math. The third-generation Xeon Scalable processors feature up to 28 cores, base frequencies up to 3.1GHz, single-core turbo frequencies up to 4.3GHz and up to six channels of DDR4-3200 memory with ECC support. They also support Intel’s new Optane Persistent Memory 200 Series, which Intel says can provide more than 225 times faster access to data than a mainstream NAND SSD.
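The instruction set behind that built-in acceleration is Intel’s AVX-512 bfloat16 extension, part of its DL Boost features. Bfloat16 keeps float32’s 8-bit exponent but only 7 explicit mantissa bits, so values keep their numeric range and give up precision. A minimal Python sketch of the idea (simple truncation for illustration; real hardware rounds):

    import struct

    def to_bfloat16(x: float) -> float:
        # Reinterpret the float32 bit pattern as an integer, zero the low
        # 16 bits (the mantissa bits bfloat16 drops), and convert back.
        bits = struct.unpack("<I", struct.pack("<f", x))[0]
        return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

    print(to_bfloat16(3.14159265))  # 3.140625: same range as FP32, coarser precision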

Subcategory Winner—Customer Demand: AMD EPYC, 2nd Gen

AMD’s expanded second-generation, 7-nanometer EPYC lineup adds three processors that feature boost frequencies of up to 3.9GHz and L3 caches reaching 256MB. Previously, the highest boost frequency of an EPYC Rome processor was 3.4GHz. The three new chips are the 24-core EPYC 7F72, the 16-core EPYC 7F52 and the eight-core EPYC 7F32, and all three are supported by server platforms from Dell EMC, Hewlett Packard Enterprise and Supermicro.

Finalist: AWS Graviton2 (Arm)

AWS Graviton2 processors use 64-bit Arm Neoverse cores with AWS-designed 7-nanometer silicon and provide up to 64 vCPUs, 25 Gbps of enhanced networking and 18 Gbps of EBS bandwidth. In June, AWS launched sixth-generation Amazon EC2 C6g and R6g instances—for compute-intensive workloads and processing large data sets in memory, respectively—which are powered by the Graviton2 processors.
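For solution providers kicking the tires, moving a workload onto Graviton2 mostly comes down to requesting an Arm instance type with an arm64 image. A minimal boto3 sketch (the AMI ID below is a placeholder; substitute a real arm64 image for your region):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    # InstanceType c6g.large selects a Graviton2-backed, compute-optimized VM.
    # The ImageId is hypothetical -- any arm64 (aarch64) AMI works.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="c6g.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])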

Finalist: Nvidia A100 Tensor Core GPU

Nvidia’s recently launched A100 aims to revolutionize AI, with the ability to perform single-precision floating point math (FP32) for training workloads and eight-bit integer math (INT8) for inference 20 times faster than the V100 GPU that came out in 2017. The A100 also uses Nvidia’s third-generation Tensor Cores, which introduce a new TF32 format for AI: it accelerates single-precision math by keeping FP32’s 8-bit exponent range while cutting the mantissa to 10 bits, so each operation has fewer bits to move through the hardware.
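In practice, frameworks expose TF32 as a switch rather than a new data type. A short PyTorch sketch (assumes a CUDA build of PyTorch running on an Ampere-class GPU such as the A100; depending on the PyTorch version, TF32 matmuls may already be enabled by default):

    import torch

    # Allow matrix multiplies and cuDNN convolutions to run in TF32 on
    # Ampere Tensor Cores; inputs stay FP32 and accumulation stays FP32.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b  # executes on Tensor Cores using the TF32 format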

 