InfiniBand: Right Technology, Wrong Economy

The reason is simple: IBM, Hewlett-Packard, Intel and Microsoft, all backers of InfiniBand, have scaled back their ambitions for the channel technology because, given tight IT budgets, customers are not about to rip out old servers built around the PCI architecture and replace them with InfiniBand servers. More often than not, customers are looking for ways to get more juice out of the technology they already own.

"Clearly, the timing is not perfect," says Jim Pappas, director of the initiative marketing enterprise platform group at Intel. "This technology will deploy more slowly than we had hoped before the economy went down."

But InfiniBand is far from dead. Ultimately, the channel technology is about providing faster connections between servers and storage and network devices. It has been hailed as the piece that will fix server I/O-processing bottlenecks because it operates at speeds from 2.5 Gbps to 30 Gbps: a 2.5-Gbps base lane that 1x, 4x and 12x link widths multiply up to 10 Gbps and 30 Gbps. Vendors say InfiniBand is about bringing mainframe-class I/O to the commodity-server market. And while Intel's funding for the technology was reportedly cut drastically and Microsoft has withdrawn its development efforts, about a dozen start-ups are acting as foot soldiers for InfiniBand. Companies including Banderacom, InfiniCon Systems, Lane 15 Software, Paceline, Voltaire and even IBM Microelectronics are all moving forward building silicon, systems and software.

Moreover, the technology is being repositioned for specialized areas, especially those that benefit from a low-latency, high-performance interconnect. When you are thinking about InfiniBand, think database applications such as Oracle9i RAC or IBM's DB2 EEE. Or think high-performance computing clusters that do scientific parallel computing, or servers with enterprise applications that can never go down.

InfiniBand has already gone through the demonstration phase at shows like Comdex and LinuxWorld. Some are expecting it to officially make an entrance into the market in 2003, mainly as an upgrade to the existing server base. More recently, start-up Voltaire announced that Hitachi agreed to integrate Voltaire's switching software into its servers and storage. Plus, IBM announced general availability of its 4x (10-Gbps) InfiniBand silicon, and Legato Systems and Veritas Software demonstrated their applications running on InfiniBand-enabled server clusters at Intel's Developer Forum.

"InfiniBand clearly has a market," says Charles A. Foley, CEO at InfiniCon Systems, King of Prussia, Pa. "It's going to be a key tool for server-to-server communications and disaggregating the I/O."

No Easy Start

Even as a concept, InfiniBand had a tough time. Back in the late '90s, a schism erupted between Intel and some of its major OEM partners, including Compaq, HP and IBM. The two camps disagreed philosophically on whether an extension to the PCI bus, which speeds up data transfers between the host computer and its peripheral devices, should be built while InfiniBand was under construction. That extension, called PCI-X, still exists today.

The OEM camp threatened to go off and build its own version of the switched-fabric interconnect, while Intel planned to develop its version simultaneously. Eventually, the groups came together in August 1999 and formed the InfiniBand Trade Association. A standard was formulated and ratified 14 months later. But in its early days, InfiniBand was being marketed as a replacement for PCI, a move that many in the industry today acknowledge was a mistake.

"Three years ago, InfiniBand was positioned as the end-all, be-all for computer problems," Foley says. "It was going to replace PCI. It was going to facilitate clustering. It was going to be the next-generation storage interconnect. That was too grandiose."

Some still argue that, theoretically, InfiniBand eliminates the need for PCI, initially developed by Intel. "If everything went through InfiniBand, it would be faster and more reliable," says Dr. Tom Bradicich, director of server architecture at IBM. "But there is a practical answer to that. The PCI industry is so big and robust, and end users have vested so much that it's not going away. One could build a server with just pure InfiniBand connectivity if one did not want to leverage the huge PCI industry out there."

But using InfiniBand as a PCI replacement would be overkill, argues InfiniCon's Foley. It would mean using this powerful technology for a narrow purpose: speeding up communication between chips residing on the same board. Some say that is not an effective use of a technology that operates at 10-Gbps speeds, and that InfiniBand is better used to increase speeds between whole systems.

"InfiniBand is powerful in its networking capability, not just in its ability as a fat pipe," Foley says.

In fact, one of InfiniBand's key features is that it disaggregates the I/O system from the server, giving the I/O more autonomy to operate independently of the CPU. That reduces the overhead on the microprocessor and makes for a less complex server system. Consider this: When an IT manager installs a server, he has to configure it for three different types of networks. That means plenty of HBA cards, NICs and cables in between. One InfiniBand host-channel adapter provides connections to all those networks.
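As a rough illustration of that single-adapter model, the sketch below enumerates host-channel adapters and their ports in C. It assumes the OpenFabrics libibverbs API, which postdates this article and is used here purely for illustration; the point is that storage, network and cluster traffic would all ride the same enumerated ports via upper-layer protocols rather than separate HBAs and NICs.

/* Sketch: list InfiniBand HCAs and their ports via libibverbs
 * (assumed API; build with: gcc list_hcas.c -libverbs). */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("HCA %s: %u port(s)\n",
                   ibv_get_device_name(devs[i]), dev_attr.phys_port_cnt);

            /* Each port can carry storage, network and cluster
             * traffic at once through upper-layer protocols. */
            for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr) == 0)
                    printf("  port %u: state %d, LID 0x%x\n",
                           p, port_attr.state, port_attr.lid);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}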

"It makes the CPU cheaper and makes a less complex environment," Foley adds.

Another key InfiniBand feature is remote direct memory access (RDMA). That gives the I/O the ability to reach directly into memory, which holds data and instructions, without involving the CPU, so IT managers can build a computing cluster out of low-cost, Intel-based servers to do Herculean computing tasks, IBM's Bradicich explains.
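To make the RDMA idea concrete, here is a minimal sketch, again assuming the later OpenFabrics libibverbs API rather than anything described in this article, of posting a one-sided RDMA write that moves a registered local buffer straight into a remote node's memory. Queue-pair setup and the out-of-band exchange of the remote address and rkey are omitted, and the function name is hypothetical.

/* Sketch: one-sided RDMA write with libibverbs (assumed API).
 * Connection setup and remote_addr/rkey exchange are omitted. */
#include <stdint.h>
#include <infiniband/verbs.h>

/* Push 'len' bytes from a registered local buffer into remote memory.
 * The remote CPU is never involved; its HCA performs the DMA. */
int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
               size_t len, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)buf, /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,                 /* key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided operation */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* completion on sender's CQ */
    wr.wr.rdma.remote_addr = remote_addr;       /* peer's registered buffer */
    wr.wr.rdma.rkey        = rkey;              /* peer's access key */

    return ibv_post_send(qp, &wr, &bad_wr);     /* 0 on success */
}

Because the transfer completes on the sender's completion queue while the receiver's processor stays idle, this is the property that lets clustered databases and scientific codes scale across racks of cheap servers.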

Matter of Timing

While InfiniBand's debut to the market may be ill-timed, no one seems to be knocking its capabilities. "I think InfiniBand is a marvelous creature," says Jon William Toigo, an independent consultant and author of storage books including The Holy Grail of Storage Management. "It has a lot of legs going forward. I just don't see it happening this year or next year."

Others agree with the assertion that its timing is unfortunate. Research firm Giga Information Group recently stated that the "adoption rate of InfiniBand servers would have been hampered by the down economy as organizations are not expected to make significant infrastructure investments beyond periodic life-cycle changes." And for its part, Dataquest recently came out with figures showing the economy is still hurting worldwide server shipments, which totaled 1.08 million last quarter. That's an increase of only 0.5 percent over the second quarter of 2001, tantamount to flat growth.

But Intel executives say that just because they have stopped development of InfiniBand chips does not mean they aren't bullish on the technology. Intel's Pappas says the company canceled development of those chips for several reasons, chiefly the economy.

"Everybody knows things aren't as good as we'd like them to be," Pappas says. "We just had to make tough decisions on what we do and don't do. And we came to the realization that there were enough companies building InfiniBand chips that even though we halted production, components would be available so we could still ship InfiniBand on our platforms. We are still very, very much in favor of driving the technology. We are simply not going to build the components at this time."

Still, Intel's halt to chip development isn't necessarily a major drawback. Jim Pike, director of server architecture at Dell, notes that some of the companies developing InfiniBand chips were already on second-generation versions, and since Intel had not even started on that phase, it would have been behind anyway. "Intel was the trailblazer, so everybody was able to take advantage of the stuff they developed first," he says. "So now there is an ecosystem forming out there that does not require Intel to invest in the silicon."

The way Pike describes it, InfiniBand is in a period commonly known as the "trough of despair." That's when a technology is no longer the new thing everyone is romantically drawn to. Meanwhile, engineers have kept their noses to the grindstone and are now on the cusp of delivering the actual technology.

"People are looking for exciting, new things, and InfiniBand just keeps going," Pike says. "The romantic part has worn off, [but I think you will see a renewed spark once we start demonstrating results."