Intel, AMD Lock Horns Over High-Performance Computing Prowess

The competition between Intel and AMD is amping up in the data center as the two semiconductor companies prepare to release next-generation server products that target high-performance computing, artificial intelligence and other performance-intensive workloads.

In the week leading up to the 2018 Supercomputing Conference in Dallas and during the event, both companies detailed how their upcoming server CPUs will provide a performance advantage over each other's competing products. Intel still maintains the largest share of processors in the world's top 500 supercomputers at 95.2 percent, but now more than ever, the semiconductor giant is facing increased competition, not just from AMD but also from Arm, IBM and Nvidia.

[Related: Samsung Set To Surpass Intel Semiconductor Sales For Second Year]

In a call with journalists last Friday, Intel executive Rajeeb Hazra detailed the company's multi-pronged approach to powering the next generation of HPC workloads. The plan includes using Intel's recently unveiled 48-core Xeon Cascade Lake CPU, Intel Optane memory and storage, and the company's connectivity hardware, such as the Intel Omni-Path Architecture. Software is also playing an important role.

"This is the path forward as we look at the next era beyond petascale to kind of exascale class of high-performance computing and AI at scale," said Hazra, corporate vice president and general manager of Intel's enterprise and government business.

War Of The Benchmarks, War Of The Cores

In Hazra's presentation, he expanded upon Intel's previous highlights of the new Xeon Cascade Lake Advanced Performance CPU, which comes with up to 48 cores, and shared more benchmark tests against AMD's current top-performing EPYC 7601 CPU for servers.

In addition to the previously disclosed Linpack floating point computing power and Stream Triad memory bandwidth tests, which show a 3.4x and 1.3x performance boost over AMD, respectively, Hazra shared five more tests showing performance gains ranging from 1.5x to 3.1x. The new tests covered HPC applications, including simulations for quantum chromodynamics and parallel molecular dynamics.
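
For readers curious what these tests actually measure: Linpack stresses double-precision floating point throughput, while Stream Triad measures sustained memory bandwidth with a simple vector kernel. Below is a minimal, illustrative C sketch of a Triad-style loop; the official STREAM benchmark is the reference implementation, and the array size and timing shown here are simplifying assumptions.

```c
/*
 * Illustrative Triad-style bandwidth kernel (not the official STREAM
 * benchmark; array size and timing are simplifications for the example).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 80000000L   /* large enough that the arrays overflow cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    const double scalar = 3.0;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* The Triad kernel: two loads and one store per element. */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    /* Three 8-byte values cross the memory bus per iteration. */
    printf("Triad: %.1f GB/s (spot check a[0]=%.1f)\n",
           3.0 * N * sizeof(double) / secs / 1e9, a[0]);

    free(a); free(b); free(c);
    return 0;
}
```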

The new benchmarks were a continuation of the one-upmanship Intel and AMD started last week.

Two days after Intel unveiled the new Xeon Cascade Lake, AMD showed off its next-generation 7-nanometer processor architecture, along with the first chip, a server CPU known as EPYC Rome, that will use the architecture. The new EPYC CPU will pack up to 64 cores, 128 threads and eight memory channels, compared to Xeon Cascade Lake's 48 cores and 12 memory channels. (It's not clear yet if the new Xeon CPU will support hyper-threading, which would equate to 96 threads.)

During AMD's presentation, dubbed the Next Horizon event, the company demonstrated the new 64-core EPYC CPU in a single-socket configuration outperforming Intel's current top-of-the-line server CPU, the 28-core Xeon Platinum 8180M, running in a dual-socket configuration with 56 cores total. AMD's new EPYC CPU is currently sampling with customers and will launch sometime in 2019. Intel's Xeon Cascade Lake, on the other hand, will be released in the first half of next year.

Intel Focuses On Expanded Hardware And Software Portfolio

Intel's 10-nanometer CPU architecture has been delayed for multiple years and now won't see a wide release until the 2019 holiday season, starting with a client CPU. The company's 10nm Xeon server CPU, code-named Ice Lake, won't arrive until after that, which has raised questions about whether Intel will be able to keep up as AMD prepares to release its 7nm EPYC Rome CPU in 2019.

Asked about AMD's future roadmap, Hazra said he's confident Intel will continue to provide competitive products for HPC and AI workloads through traditional advances in processors as well as a "vibrant software ecosystem" and other products, such as Optane memory and interconnect components.

"We are very confident that both in the evolution of this step but also in the revolution needed to get to the next level of computing, we have unparalleled assets," Hazra said. "And those assets are not just around cores and frequencies and things like that of the past, but around how we put a diverse set of IP together, how we actually enable standards in the ecosystem to use that and harness the energy from both software and hardware out there on those platforms."

Optane DC persistent memory, Intel's new non-volatile memory product for data centers, fills in an important gap in the memory-storage hierarchy, Hazra said, by providing persistence and higher memory capacity that brings larger amounts of data closer to the processor. This can improve existing HPC capabilities, such as checkpoint restart, and enable new kinds of systems that rely less on lower tiers of storage, making them both more effective and more cost-efficient.
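
To make the checkpoint-restart point concrete, here is a minimal, illustrative C sketch of the general pattern: application state is copied into a memory-mapped region and flushed, with an ordinary mapped file standing in for persistent memory. The mount path and the state layout are invented for the example; production Optane DC deployments typically go through DAX-mounted filesystems or the PMDK libraries rather than a plain msync.

```c
/*
 * Illustrative checkpoint sketch: working state is flushed to a
 * memory-mapped file standing in for persistent memory. The path
 * and the sim_state layout are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct sim_state {            /* hypothetical solver state */
    long   iteration;
    double field[1024];
};

int main(void)
{
    int fd = open("/mnt/pmem/checkpoint.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(struct sim_state)) != 0) { perror("ftruncate"); return 1; }

    struct sim_state *ckpt = mmap(NULL, sizeof *ckpt, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (ckpt == MAP_FAILED) { perror("mmap"); return 1; }

    struct sim_state live = { .iteration = 42 };   /* pretend work happened */

    /* Checkpoint: copy working state into the mapped region and flush it.
     * On byte-addressable persistent memory this replaces a trip through
     * the parallel filesystem, which is what speeds up restarts. */
    memcpy(ckpt, &live, sizeof live);
    msync(ckpt, sizeof *ckpt, MS_SYNC);

    printf("checkpointed iteration %ld\n", ckpt->iteration);
    munmap(ckpt, sizeof *ckpt);
    close(fd);
    return 0;
}
```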

Texas Advanced Computing Center's Frontera is the first supercomputer to adopt Optane DC persistent memory, which works with Xeon Cascade Lake and enables near "instant boots" of full racks and advanced check-pointing, according to Hazra.

"This becomes a wonderful way to store your meta data caches and make storage computation far more effective and efficient," Hazra said.

On the software side, the upcoming Xeon Cascade Lake will support a new feature called DL Boost that accelerates deep learning inference by a factor of 17 over Intel's previous generation of Xeon CPUs.
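
DL Boost centers on the AVX-512 VNNI instructions, which fuse the multiply-and-accumulate steps of low-precision inference into a single operation. The plain-C sketch below only illustrates the underlying arithmetic, 8-bit activations multiplied by 8-bit weights and accumulated into 32-bit sums; the input values are made up, and the real speedup comes from the vector hardware performing many of these operations per cycle.

```c
/*
 * Scalar reference for the kind of INT8 dot product that DL Boost
 * (AVX-512 VNNI) accelerates: 8-bit activations times 8-bit weights,
 * accumulated into 32-bit sums. Values below are made up.
 */
#include <stdint.h>
#include <stdio.h>

/* acc += sum over i of (unsigned activation[i] * signed weight[i]) */
static int32_t int8_dot(const uint8_t *act, const int8_t *wt, int n, int32_t acc)
{
    for (int i = 0; i < n; i++)
        acc += (int32_t)act[i] * (int32_t)wt[i];
    return acc;
}

int main(void)
{
    uint8_t act[4] = { 10, 20, 30, 40 };   /* made-up quantized activations */
    int8_t  wt[4]  = {  1, -2,  3, -4 };   /* made-up quantized weights */

    printf("accumulated result: %d\n", int8_dot(act, wt, 4, 0));  /* prints -100 */
    return 0;
}
```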

AMD Boasts Supercomputer Win

AMD currently does not have any EPYC server CPUs in the world's top 500 supercomputers, but that is expected to change next year when the University of Stuttgart in Germany deploys its new Hewlett Packard Enterprise supercomputer that will use AMD's upcoming EPYC Rome CPUs.

The university said the upcoming system, which was announced at Supercomputing 2018, will be the world's fastest supercomputer for industrial production, providing speeds that are 3.5 times faster than its current system.

"AMD has a rich history in high performance computing and the EPYC processors excel in leadership floating point performance," Forrest Norrod, head of AMD's Datacenter and Embedded Systems Group, said in a statement. "This means better and faster outcomes by researchers using the Hawk supercomputer on HPC workloads like simulation, computational fluid dynamics and machine learning."

In other Supercomputing 2018 announcements, AMD said that it will launch the new high-frequency EPYC 7371 CPU in January that will benefit workloads for electronic design automation, high-frequency trading and HPC. The company also said a new HPC system at Lawrence Livermore National Laboratory’s High Performance Computing Innovation Center will use both AMD EPYC CPUs and Radeon Instinct GPUs, demonstrating AMD's gains are also extending to its graphics cards.

At AMD's Next Horizon event last week, the company revealed its next-generation 7nm GPUs for data centers, called the Radeon Instinct MI60 and MI50. The MI60, which is due out later this year, features two times the density and a 1.25x performance gain over AMD's previous top-line graphics card while using 50 percent less power, and supports deep learning, HPC, rendering and other high-end use cases.

In addition, AMD announced a new version of its ROCm open software platform, which is meant to accelerate GPU-enabled HPC applications.

HPC Partner Says Intel Is Making Price Concessions

AMD wasn't the only Intel competitor that announced gains in HPC this week at Supercomputing 2018. Nvidia saw a 48 percent increase in the number of supercomputers using its GPU accelerators, with 122 out of the world's top 500. Arm, owned by Japan's SoftBank, broke into a leading supercomputer for the first time with a new HPE system using Arm-based Cavium ThunderX2 chips. IBM, meanwhile, laid claim to the world's top two supercomputers with its Power9 processors.

The increased competition in the processor realm is a welcome development to Intel HPC partners like Dominic Daninger, vice president of engineering at Nor-Tech, a Burnsville, Minn.-based system builder.

"Competition always makes things healthier," he said.

Nor-Tech recently added an AMD EPYC-based demo supercomputer and has multiple customers lined up to switch from Intel Xeon to AMD EPYC in the next quarter, according to Daninger.

The executive said he's seeing increased interest from customers for AMD EPYC CPUs because of their higher core count and increased memory bandwidth. While more cores can become prohibitively expensive with some proprietary simulation applications due to per-core licensing costs, there are some open-source computational fluid dynamics applications that remove that cost burden, Daninger said. There are also some companies running their own proprietary software without those extra costs.

AMD has generated so much interest that Nor-Tech has been able to get lower prices from Intel for its Xeon CPUs, according to Daninger. He said it happens when the "competition has been deemed real" and Intel could potentially lose out on a customer deal. He declined to elaborate on the price discounts.

"We're definitely seeing opportunities where a few years ago you couldn’t get Intel do to price concessions at all," Daninger said.

An Intel spokesman told CRN that "Intel's pricing practice hasn't materially changed from what we have always had in place." He added: "We continue to listen to our customers and make adjustments on products and prices to better meet their needs."

Daninger said Nor-Tech is also seeing an uptick in interest for Nvidia's GPU accelerators. He said their main advantage is scale because "they've got so many cores in them." He added that Ansys, a provider of engineering simulation software, allows customers to offload computing to GPUs in exchange for a "very modest licensing fee," which creates a good incentive for GPU accelerators.

"Anything where they can get more performance for a modest increase in a licensing fee is a big deal," he said.