InfiniBand is a switched fabric communications link used in high-performance computing and enterprise data centers. Its features include high throughput, low latency, quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices.
InfiniBand forms a superset of the Virtual Interface Architecture.
Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. On top of the point to point capabilities, InfiniBand also offers multicast operations as well. It supports several signalling rates and, as with PCI Express, links can be bonded together for additional throughput.
The SDR serial connection's signalling rate is 2.5 gigabit per second (Gbit/s) in each direction per connection. DDR is 5 Gbit/s and QDR is 10 Gbit/s. FDR is 14.0625 Gbit/s and EDR is 25.78125 Gbit/s per lane.
For SDR, DDR and QDR, links use 8B/10B encoding — every 10 bits sent carry 8 bits of data — making the useful data transmission rate four-fifths the raw rate. Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s useful data, respectively. For FDR and EDR, links use 64B/66B encoding — every 66 bits sent carry 64 bits of data. (Neither of these calculations takes into account the additional physical layer overhead requirements for comma characters or protocol requirements such as StartOfFrame and EndOfFrame.)
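The relationship between raw signalling rate and useful data rate can be sketched in a few lines; the per-lane rates and encoding efficiencies below are the ones stated above, and, as noted, physical-layer overhead such as comma characters and framing is ignored.

```python
# Per-lane useful data rate = raw signalling rate x encoding efficiency.
# Rates in Gbit/s per lane, taken from the text; framing overhead ignored.

ENCODING_EFFICIENCY = {
    "8b/10b": 8 / 10,    # SDR, DDR, QDR: every 10 bits carry 8 bits of data
    "64b/66b": 64 / 66,  # FDR, EDR: every 66 bits carry 64 bits of data
}

LANE_RATES = {
    "SDR": (2.5, "8b/10b"),
    "DDR": (5.0, "8b/10b"),
    "QDR": (10.0, "8b/10b"),
    "FDR": (14.0625, "64b/66b"),
    "EDR": (25.78125, "64b/66b"),
}

def useful_rate(generation: str) -> float:
    """Useful data rate in Gbit/s for one lane of the given generation."""
    raw, encoding = LANE_RATES[generation]
    return raw * ENCODING_EFFICIENCY[encoding]

for gen in LANE_RATES:
    print(f"{gen}: {useful_rate(gen):.5g} Gbit/s useful per lane")
```

This reproduces the 2, 4, and 8 Gbit/s figures for SDR, DDR, and QDR; the 64B/66B rates were chosen by the standard so that EDR lands on exactly 25 Gbit/s of useful data per lane.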
Implementers can aggregate links in units of 4 or 12, called 4X or 12X. A 12X QDR link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. As of 2009, most systems use a 4X aggregate, implying a 10 Gbit/s (SDR), 20 Gbit/s (DDR) or 40 Gbit/s (QDR) connection. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
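The aggregation arithmetic can be checked the same way; this sketch follows the 12X QDR example given above (10 Gbit/s per lane, 8B/10B encoding).

```python
# Raw and useful throughput of an aggregated InfiniBand link:
# raw = per-lane rate x lane count; useful = raw x encoding efficiency.

def link_throughput(lane_rate_gbps: float, lanes: int, efficiency: float):
    """Return (raw, useful) throughput in Gbit/s for an aggregated link."""
    raw = lane_rate_gbps * lanes
    return raw, raw * efficiency

# 12X QDR: 10 Gbit/s per lane, 12 lanes, 8B/10B encoding (8/10 efficiency)
raw, useful = link_throughput(10.0, 12, 8 / 10)
print(raw, useful)  # 120.0 96.0

# 4X QDR, the common configuration as of 2009
raw_4x, useful_4x = link_throughput(10.0, 4, 8 / 10)
print(raw_4x, useful_4x)  # 40.0 32.0
```

This matches the 120 Gbit/s raw / 96 Gbit/s useful figures for 12X QDR stated in the text; note that the 10/20/40 Gbit/s numbers usually quoted for 4X links are raw rates, not useful data rates.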