Intel Kills 2nd-Gen Omni-Path Interconnect For HPC, AI Workloads

'If there's not a road map, it wouldn't make sense to commit down Omni-Path's route. It's a one-trick pony, so to speak,' said one Intel partner of the chipmaker's decision to cancel a second-generation version of its Omni-Path Architecture fabric for high-performance computing workloads.


Intel has halted plans for a second-generation version of its Omni-Path Architecture that would have provided a low-latency, 200-Gbps interconnect for server clusters running high-performance computing and artificial intelligence workloads, the company confirmed to CRN.

Jennifer Huffstetler, vice president and general manager, data center product management and storage, said the second-generation Omni-Path fabric, also known as the OPA 200 series, is no longer on the company's road map. Intel continues to sell, support and maintain the first-generation fabric, the OPA 100 series, she said.

[Related: Jason Kimrey: Intel’s Data-Centric Platform Strategy Is A Winning Hand For Partners]

"We see connectivity as a critical pillar in delivering the performance and scalability for a modern data center. We're continuing our investment there while we will no longer support the Omni-Path 200," she said in an interview with CRN. "We are continuing to see uptake in the HPC portfolio for OPA 100."

Intel is making further investments in its connectivity portfolio, which includes Ethernet and silicon photonics products, most recently with the chipmaker's acquisition of network programmability specialist Barefoot Networks, she said.

"That's just an example of Intel's strategy to support the end-to-end networking and infrastructure," she said. "They have different requirements between high-performance computing and cloud, so we’ve got two separate investments in those streams."

Huffstetler's comments came after solution providers, who asked not to be identified, told CRN that the Santa Clara, Calif.-based company informed some partners of the OPA 200's cancellation and the OPA 100's change in availability.

According to both partners, Intel is now producing OPA 100 parts on a build-to-order basis. The company has not indicated that it’s working on an alternative product line that could serve as an eventual replacement for Omni-Path Architecture, the sources said.

One of the sources said an Intel representative discouraged partners from planning new designs with the OPA 100 series because its build-to-order status will mean long lead times and delays. Combined with the OPA 200's cancellation, the partner said, this leaves partners with one less product line to invest in as Intel seeks validation in the channel for its data-centric platform strategy.

When asked whether OPA 100 is now build-to-order, Huffstetler said, "I can speak to the fact that we are continuing to sell, maintain and support that product line."

Huffstetler declined to say whether Intel was discouraging partners from using OPA 100 for new designs, only saying that the company announced new capabilities in June and that Intel continues to see the product line as "a key element of our high-performance computing portfolio."

As for whether a new product line will eventually replace Omni-Path Architecture, Huffstetler said the company is "evaluating options for extending the capabilities for high-performance Ethernet switches that we can expand the ability to meet the growing needs of HPC and AI."

"More to come," she added.

OPA 200 Cancellation Gives Ammo To Mellanox, Nvidia

The cancellation of the OPA 200 series will make it difficult for channel partners to further invest in the Omni-Path product line, said one of the sources.

"If there's not a road map, it wouldn't make sense to commit down Omni-Path's route," the partner said. "It's a one-trick pony, so to speak."

It doesn't help that 20-year-old interconnect vendor Mellanox, which GPU powerhouse Nvidia is in the process of acquiring, began shipping its 200-Gbps HDR InfiniBand interconnect offering earlier this year and has already unveiled plans for a 400-Gbps version, according to the partner. Intel reportedly sought to acquire Mellanox before Nvidia disclosed in March that it was paying $6.9 billion to acquire the company.

"[Mellanox] is going to pretty much have the market to themselves," the partner said.

Intel's Omni-Path and Mellanox's HDR InfiniBand are both considered end-to-end interconnect solutions, consisting of adapters, switches, cables and management software. Intel's U.S. distributors for Omni-Path are Arrow, ASI, Avnet, Ingram Micro, Synnex and Tech Data, according to its website.

The partner added that Intel has faced tough competition from Mellanox, whose InfiniBand products are more widely deployed than Omni-Path, making it more difficult for Intel to gain a foothold.

"It would have been a huge conversion," the partner said. "It's much easier to expand on what you’ve got."

Scott Hamilton, an HPC solution architect at Atos North America, a Purchase, N.Y.-based solution provider that works with both Intel and Mellanox, said while he had not yet been informed of Intel’s Omni-Path plans, it wouldn't surprise him if Intel stopped development efforts.

The HPC architect said Atos is largely a Mellanox shop when it comes to interconnects. In the handful of customers that did use Omni-Path, Intel provided deep discounts that helped with the customers' tight budgets, according to Hamilton.

"There is a cost benefit, but as far as application performance, system performance, we have not seen a large differentiating factor between Mellanox and Omni-Path," he said.

Intel Had Been Quiet On Omni-Path Strategy

The OPA 200 series was set to launch in 2019—specifically in the second half of the year, according to a road map leaked last year—but Intel has largely been quiet about the second-generation version for several months, even as it provided updates on its data center strategy in April with the launch of the chipmaker's second-generation Xeon Scalable processors and Optane DC persistent memory.

While Intel Omni-Path received a passing mention at the company's Data-Centric Innovation Day in April, Intel did not provide any updates on OPA 200. In addition, the company's new lineup of Xeon Scalable processors did not include any designs that integrated the Omni-Path fabric on the chip, unlike select CPUs from the first-generation Xeon Scalable lineup.

Intel was also mum on OPA 200 at the International Supercomputing Conference last month in Germany, where the company last year unveiled its 2019 launch plans for the second-generation interconnect. Intel's Huffstetler said the company did announce new capabilities at the conference for OPA 100, which included storage enhancements, overall performance enhancements and multi-rail capabilities.

An Intel spokesperson said the multi-rail capabilities allow a single node to use two OPA 100 add-in cards to reach 200 gigabit-per-second speeds between the node and the switch.

At the conference in 2018, Intel Marketing Director Joe Yaworski told the publication Inside HPC that OPA 200 would "offer twice the bandwidth performance" and a better price-performance ratio over OPA 100. The OPA 200 series could also help accelerate AI workloads, he added.

"Some of the features that we have added into the product, and some of the performance features, will make it very good for AI, especially AI training and the ability to scale AI training to a large number of nodes," Yaworski said in the 2018 interview.

High-performance interconnects are increasingly seen as an important solution to input and output (I/O) performance bottlenecks in HPC workloads, which rely on high-speed, low-latency communication between compute nodes for applications such as computational fluid dynamics.

For example, while Ethernet is still the dominant interconnect for the world's top 500 supercomputers, Mellanox's InfiniBand has been gaining ground, constituting 25 percent of systems, while Intel's Omni-Path is used in only 9.8 percent, according to TOP500.

In the 2018 interview, Yaworski said beyond the supercomputing and public HPC community, Intel Omni-Path sales had been growing with HPC cloud providers and enterprises.

"Over the last two years, [Intel Omni-Path] has received a very large uptake in the commercial side of HPC, so things like automotive, aerospace, manufacturing," he said.

Intel's Omni-Path Ambitions Began With Acquisitions

Intel's development of Omni-Path Architecture began in 2012 when it acquired interconnect assets from HPC vendors Cray and QLogic for $140 million and $125 million, respectively. The company went on to develop its HPC interconnect product line using technologies from those acquisitions, culminating in the high-performance fabric's announcement in 2014.

Intel pitched Omni-Path, meant as a successor to its True Scale fabric, as a better alternative to the InfiniBand standard championed by Mellanox, reducing the costs, energy and number of components associated with HPC fabrics while increasing their performance, density and reliability, according to a promotional video on Intel's website.

In a Jan. 30 blog post, Intel said that Intel Omni-Path is used "across hundreds of accounts, including customers in government, academic research and commercial enterprise." The company added that OPA has been supported by a "large and growing ecosystem consisting of hardware vendors, storage vendors" as well as open-source and commercial applications. That includes Dell Technologies and Hewlett Packard Enterprise, both of which sell networking switches based on Intel's Omni-Path Architecture.

Hamilton, the HPC architect at Atos North America, said Intel has faced entrenched competition from Mellanox in a market where multiple newcomers have come and gone.

"We've seen it several times in HPC, where a new interconnect comes up every three to five years, then they disappear," he said. "Customers are fearful of going with a new party because of that historical trend."

While Atos has dropped development efforts with Omni-Path for its high-end systems, Hamilton said, he hopes Intel doesn't give up the fight.

"It was nice to see Intel make an effort in the interconnect market, and I hope they stick with it," he said. "We'd love to see them stay in there for the competitiveness."