QsNet is a high-speed interconnect designed by Quadrics and used in HPC clusters, particularly Linux Beowulf clusters. Although it can carry TCP/IP traffic, like SCI, Myrinet and InfiniBand it is usually driven through a communication API such as MPI or SHMEM called from a parallel program.
The interconnect consists of a PCI card in each compute node and one or more dedicated switch chassis, connected by copper cables. Within each switch chassis, a number of line cards carry Elite switch ASICs, which are internally linked to form a fat-tree topology. As with other interconnects such as Myrinet, very large systems can be built by arranging multiple switch chassis as spine (top-level) and leaf (node-level) switches; such systems are usually called federated networks.
As of 2003, there are two generations of QsNet. The older QsNetI was launched in 1998 and used 64-bit/66 MHz PCI cards carrying the 'elan3' custom ASIC. These gave an MPI bandwidth of around 350 Mbyte/s unidirectional with about 5 μs latency. QsNetII was launched in 2003 and used 133 MHz PCI-X cards carrying 'elan4' ASICs. These gave an MPI bandwidth of 912 Mbyte/s and an MPI latency starting from 1.22 μs, depending on the platform used.
In 2004 Quadrics began releasing small to medium stand-alone switch configurations called QsNetII E-Series, ranging from 8-way to 128-way systems.