November 21, 2002 2:17 PM PST
InfiniBand reborn for supercomputing
Los Alamos National Laboratory has installed a major supercomputer made of 128 computers interconnected by InfiniBand, and a host of InfiniBand companies announced products this week at the SC2002 supercomputing show in Baltimore.
The surge in support is a reversal of fortune for InfiniBand, a standard initially developed by computing giants including IBM, Intel, Hewlett-Packard, Compaq Computer, Dell Computer and Sun Microsystems to succeed the omnipresent PCI technology used to plug devices such as network cards into computers.
If InfiniBand catches on in supercomputing, it will threaten niche companies such as Myricom and Quadrics that currently sell proprietary networking hardware that does much the same thing as InfiniBand.
"The high-performance computing interconnects these days are really a hodgepodge of proprietary interconnects that all do basically the same thing," said Illuminata analyst Gordon Haff. "The idea of having a high-performance, low-overhead interconnect that everyone can agree on is pretty appealing in that space. I don't see how those smaller niche interconnects can prevail."
In recent years, "Beowulf clusters" have caught on as a way to assemble supercomputers out of interconnected inexpensive Linux servers.
InfiniBand may not have dazzled the computer industry, but it has reached data transfer speeds yet to be attained through more ordinary networking technologies such as Ethernet or Fibre Channel. The "4x" version of InfiniBand can transfer data at 10 gigabits per second, and there's a 12x version in the works. Mainstream Ethernet adoption is just reaching 1 gigabit per second, while Fibre Channel is now standardized at 2 gigabits per second.
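The quoted rates follow from InfiniBand's lane-based design: a 1x link signals at 2.5 gigabits per second, and the wider "4x" and "12x" links simply aggregate that many lanes. A quick sketch of the arithmetic (these are signaling rates; InfiniBand's 8b/10b line encoding leaves 80 percent of each lane for payload):

```python
# InfiniBand link-width arithmetic (signaling rates).
LANE_GBPS = 2.5  # one 1x lane signals at 2.5 Gbps

def link_rate(width):
    """Aggregate signaling rate, in Gbps, for a link of the given width."""
    return width * LANE_GBPS

print(link_rate(4))         # 4x link: 10.0 Gbps, as cited above
print(link_rate(12))        # 12x link: 30.0 Gbps

# 8b/10b encoding carries 8 data bits in every 10 signaled bits,
# so usable data bandwidth is 80% of the signaling rate:
print(link_rate(4) * 0.8)   # 8.0 Gbps of payload on a 4x link
```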
InfiniBand isn't cheap, but supercomputer customers are used to paying a premium for better performance. One appealing feature of Beowulf clusters is that the same basic software works on inexpensive models with Ethernet connections and a few computers, and on high-end models with fast networking and thousands of systems.
Hopping on the InfiniBandwagon
Dell Computer, whose coming "modular" computers will incorporate InfiniBand, is testing InfiniBand clusters in its labs as an option for high-performance computing, the company said.
Other companies are also getting involved, many of them announcing their plans at the SC2002 conference this week. Among them are Paceline Systems and InfiniSwitch, which make high-speed switches to connect InfiniBand-enabled devices.
Paceline announced a promotional kit for high-performance computing customers and an agreement with Abba Technologies to sell its hardware to supercomputer customers.
Paceline also is working with a smaller company, MPI Software Technology, to create a version of crucial Beowulf software for InfiniBand clusters. That software, the Message Passing Interface (MPI), is an open standard with open-source implementations that governs how data is exchanged among computers that each have their own memory. MPI Software Technology sells a commercial version of the program called MPI/Pro.
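MPI itself is typically used from C or Fortran. Purely as a conceptual sketch of what a message-passing interface does — moving data explicitly between processes that share no memory — here is a minimal illustration using Python's standard multiprocessing module (the names here are illustrative, not part of MPI):

```python
# Conceptual sketch of message passing between two processes with
# separate memory, in the spirit of MPI's send/receive model.
# Real MPI programs would use MPI_Send/MPI_Recv; this uses a Pipe.
from multiprocessing import Process, Pipe

def worker(conn):
    # This process has its own memory; data arrives only as messages.
    data = conn.recv()       # blocking receive, like MPI_Recv
    conn.send(sum(data))     # send a result back, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send([1, 2, 3, 4])  # explicit message, not shared memory
    print(parent.recv())       # -> 10
    p.join()
```

In a real cluster the messages cross the interconnect — which is exactly where a fast, low-latency fabric like InfiniBand pays off.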
The starter kit costs $9,995 for a system with a Paceline 4100 switch, four adapter cards so servers can be connected, the MPI/Pro software and cables. Evaluation units are available now, with general availability scheduled for February 2003.
TopSpin, which wants to reach mainstream commercial customers as well as supercomputer buyers, also is working with MPI Software Technology. Its hardware is used in the cluster at Los Alamos.
The Los Alamos system uses 128 dual-Xeon computers from Promicro Systems.
Another company trying to benefit from the supercomputing market is JNI, which makes InfiniBand cards that plug into servers. The company announced two new cards--one using Mellanox chips and the other using IBM chips--each with two InfiniBand ports. MPI Software Technology supports the cards, JNI said.