Omni-Path Spin-Out Aims To Help HPC Partners Keep Nvidia In Check

‘One of our competitors is going full solution. They’re getting deeper and deeper into not needing anybody else. And so that eliminates choice and creates opportunity for us,’ says Phil Murphy, CEO of Intel Omni-Path spin-out Cornelis Networks, in an interview with CRN.


What would you say are the main points of differentiation between Omni-Path and Mellanox's InfiniBand technology?

People aren’t going to remember this, but when InfiniBand was being defined -- this was 20 years ago -- it was not being defined for high-performance computing at all. It was really defined to replace the channel interface in IBM mainframes. It was something to connect I/O. Back in those days, servers were based on PCI-X. It was crazy, but servers had this 64-bit parallel bus that they were trying to run at 133 megahertz, which was very, very difficult. That’s where they were, and then they had to figure out what to do next. And so Intel had a technology called Next-Generation I/O. IBM, back in the day, was participating, and they had something called Future I/O, and they were both trying to solve the same problem using a high-speed serial interface. That’s where InfiniBand was born: when the warring parties put their differences [aside], they created the InfiniBand Trade Association to go after solving I/O in the enterprise.

So if you think about what was going on [from] 2000 through 2002 -- the dot-com bubble burst, we had 9/11, and we had lots of terrible things going on. Because of that, the original idea behind InfiniBand just went by the wayside. Intel eventually came out with something we’ll call Arapaho at the time, but it ended up being PCI Express, and that made the whole idea that InfiniBand was originally going after unnecessary. InfiniBand was going to replace Fibre Channel and Ethernet in the data center. That never happened. So InfiniBand was looking for a home, and Mellanox pivoted to high-performance computing at that point.

Anyway, the underlying architecture for InfiniBand is not really geared for high-performance computing. Our architecture was built from the ground up to focus on high-performance computing, so it’s very different. And Mellanox will talk about how there was a war between onload and offload, and offload won. That’s not true at all. What’s really going on is that you need to strike the right balance between what’s running on the server and what’s running in the adapter, especially in high-performance computing. So, in an I/O paradigm, you have a server, and it’s talking to a couple hundred peripherals, for lack of a better word. So you’ve got some finite number of things that you’re talking to. When you’re in high-performance computing, it’s very different. You’re on a node, and it might be talking to thousands or even tens of thousands of other devices. The problem is very different, so you need to make sure that you have the right level of functionality in the host and the right level of functionality in the adapter. And we strike that perfect balance.

Our technology has huge wins at the National Nuclear Security [Administration] branch of the DOE. We won the TLCC-2 project with our [InfiniBand-based QDR] architecture. And then Omni-Path won CTS-1, which is a follow-on to that, because they were able to test our architecture and prove that it scaled far better than the competition.

[Omni-Path Architecture] is better suited for HPC because it was built from the ground up with that focus. At its inception, InfiniBand was not targeting HPC, but rather replacing PCI-X and potentially the local and storage area networks in the data center. Because of that, the initial architecture and verbs-based software infrastructure were optimized for that market. On the other hand, Omni-Path is a combination of technologies originally developed by Cray and by QLogic, each with a singular focus on the needs of the HPC market.
