QsNet was a high-speed interconnect designed by Quadrics for use in high-performance computing clusters, particularly Linux Beowulf clusters. Although it can carry TCP/IP, it is usually used, like SCI, Myrinet and InfiniBand, with a communication API such as the Message Passing Interface (MPI) or SHMEM called from a parallel program.
The interconnect consists of a PCI card in each compute node and one or more dedicated switch chassis, connected with copper cables. Within each switch chassis are a number of line cards carrying Elite switch ASICs, which are internally linked to form a fat-tree topology. As with other interconnects such as Myrinet, very large systems can be built by arranging multiple switch chassis as spine (top-level) and leaf (node-level) switches. Such systems were called "federated networks".
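To illustrate the topology, QsNet fat trees are commonly described as k-ary n-trees built from 8-port Elite switches (k = 4, i.e. quaternary). The sketch below is illustrative only, not Quadrics software; the `fat_tree_counts` helper and the assumption of a strict k-ary n-tree are mine, using the standard textbook formulas for that topology family:

```python
# Illustrative sketch (not Quadrics software): node and switch counts
# for a k-ary n-tree fat tree, the topology family commonly attributed
# to QsNet (k = 4, built from 8-port Elite switch ASICs).

def fat_tree_counts(k: int, levels: int) -> tuple[int, int]:
    """Return (nodes, switches) for a k-ary n-tree with `levels` levels.

    Each switch has 2*k ports (k down-links, k up-links), so the tree
    supports k**levels nodes using levels * k**(levels - 1) switches.
    """
    nodes = k ** levels
    switches = levels * k ** (levels - 1)
    return nodes, switches

if __name__ == "__main__":
    for n in range(1, 4):
        nodes, switches = fat_tree_counts(4, n)
        print(f"{n}-level quaternary fat tree: {nodes} nodes, {switches} switches")
```

This shows why large installations were federated across multiple chassis: each extra level multiplies the node count by four while the per-switch port count stays fixed.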
As of 2003, there were two generations of QsNet. The older QsNetI was announced in 1998 and used 64-bit/66 MHz PCI cards carrying the 'Elan3' custom ASIC. These gave a unidirectional MPI bandwidth of around 350 Mbyte/s with a latency of about 5 µs. QsNetII was launched in 2003 and used 133 MHz PCI-X cards carrying the 'Elan4' ASIC. These gave an MPI bandwidth of up to 912 Mbyte/s and an MPI latency starting from 1.22 µs, with exact performance depending on the host platform.
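The quoted figures can be combined in the usual first-order cost model, where transfer time is start-up latency plus message size divided by bandwidth. The sketch below applies that textbook approximation (not a Quadrics specification) to the QsNetII numbers above:

```python
# Simple linear cost model t = latency + size / bandwidth, using the
# QsNetII figures quoted above (1.22 us MPI latency, 912 Mbyte/s).
# The linear model is a textbook approximation, not a Quadrics spec.

LATENCY_S = 1.22e-6       # MPI zero-byte latency, in seconds
BANDWIDTH_BPS = 912e6     # sustained MPI bandwidth, in bytes/second

def transfer_time(size_bytes: float) -> float:
    """Estimated one-way transfer time for a message of size_bytes."""
    return LATENCY_S + size_bytes / BANDWIDTH_BPS

if __name__ == "__main__":
    for size in (0, 1024, 1 << 20):
        print(f"{size:>8} bytes: {transfer_time(size) * 1e6:.2f} us")
```

As the model makes clear, small messages are latency-dominated while large messages are bandwidth-dominated, which is why both figures are quoted for an interconnect.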
In 2004, Quadrics began releasing small-to-medium stand-alone switch configurations called the QsNetII E-Series, ranging from 8-way to 128-way systems.