Fermi (supercomputer)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Elda.rossi0 (talk | contribs) at 16:10, 29 March 2016 (History). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Fermi
Cineca University Consortium in Casalecchio di Reno (BO)
Active: operational 2012
Sponsors: Ministry of Education, Universities and Research (Italy)
Operators: the members of the Consortium[1]
Location: Cineca, Casalecchio di Reno, Italy
Architecture: IBM BG/Q; 5D torus interconnect configuration; 10,240 IBM PowerPC A2 processors, 1.6 GHz with 16 cores each; 163,840 cores
Power: 822 kW
Operating system: CNK[2]
Memory: 16 GB/node (1 GB/core); 160 TiB total
Storage: 2 PB of scratch space
Speed: 2.097 PFLOPS
Ranking: TOP500: 37, November 2015
Purpose: materials science, weather, climatology, seismology, biology, computational chemistry, computer science
Legacy: ranked 7th on TOP500 when built[3]
Website: hpc.cineca.it/hardware/fermi

Fermi is a 2.097-petaFLOPS supercomputer located at Cineca.[4]

History

Supercomputer Fermi BlueGene/Q at Cineca

FERMI is the main HPC system at CINECA. It was acquired in June 2012 and entered full production on 8 August of the same year. Fermi is the Italian national Tier-0 system for scientific research and is also part of the European HPC infrastructure (PRACE). Its procurement was sponsored by the Ministry of Education, Universities and Research (Italy).

In June 2012, Fermi reached seventh position on the TOP500 list of the fastest supercomputers in the world.[5]

On the Graph500 list of top supercomputers, Fermi reached fifth position; in the benchmark, the system tested at 2,567 gigaTEPS (traversed edges per second).[6]

On the Green500 list of top supercomputers, Fermi reached fifty-ninth position; in the benchmark, the system tested at 2,176.57 MFLOPS/W (performance per watt).[7]

Specifications

FERMI is a BlueGene/Q[8] system, the latest generation of the IBM project for designing petascale supercomputers. It is made up of 10 racks of two midplanes each, for a total of 10,240 compute nodes and 163,840 cores.

  • Each Compute Card ("compute node") features an IBM PowerPC A2 chip with 16 cores running at 1.6 GHz, 16 GB of RAM and the network connections. 32 compute nodes are plugged into a so-called Node Card; 16 Node Cards are assembled into one midplane, and two midplanes plus two I/O drawers form a rack, for a total of 32 x 32 x 16 = 16,384 cores per rack. The compute nodes run a lightweight Linux-like kernel (CNK[9], the Compute Node Kernel).
  • Compute nodes are diskless; I/O functionality is provided by dedicated I/O nodes.
  • Access is via SSH through the front-end nodes (or login nodes) at login.fermi.cineca.it. The login nodes run a complete Red Hat Linux distribution (6.2). Parallel applications must be cross-compiled on the front-end nodes and can only be executed on the partitions defined on the compute nodes.
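The rack arithmetic above can be sketched numerically; all figures come from this section, and the constant names are illustrative only:

```python
# BlueGene/Q hierarchy as described above (figures from this section).
CORES_PER_NODE = 16           # one PowerPC A2 chip, 16 cores at 1.6 GHz
NODES_PER_NODE_CARD = 32      # compute nodes plugged into one Node Card
NODE_CARDS_PER_MIDPLANE = 16  # Node Cards assembled into one midplane
MIDPLANES_PER_RACK = 2        # two midplanes (plus two I/O drawers) per rack
RACKS = 10                    # CINECA's Fermi configuration

nodes_per_rack = NODES_PER_NODE_CARD * NODE_CARDS_PER_MIDPLANE * MIDPLANES_PER_RACK
cores_per_rack = nodes_per_rack * CORES_PER_NODE

print(nodes_per_rack)          # 1024 compute nodes per rack
print(cores_per_rack)          # 16384 cores per rack ("16K")
print(RACKS * nodes_per_rack)  # 10240 compute nodes in total
print(RACKS * cores_per_rack)  # 163840 cores in total
```

The totals reproduce the figures quoted in the opening paragraph of this section: 10,240 compute nodes and 163,840 cores.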

The CINECA configuration consists of 10 racks, as follows:

  • 2 racks: 16 I/O nodes per rack (minimum job allocation of 64 nodes, 1,024 cores).
  • 8 racks: 8 I/O nodes per rack (minimum job allocation of 128 nodes, 2,048 cores).
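The minimum allocation sizes above follow from the I/O node counts, on the assumption (consistent with the list, though not stated explicitly in it) that each I/O node serves an equal block of a rack's 1,024 compute nodes and a job must occupy at least one full block:

```python
# Minimum job allocation per rack type, assuming each I/O node serves an
# equal share of the rack's compute nodes (an assumption, see lead-in).
NODES_PER_RACK = 1024
CORES_PER_NODE = 16

for io_nodes_per_rack in (16, 8):
    min_nodes = NODES_PER_RACK // io_nodes_per_rack
    min_cores = min_nodes * CORES_PER_NODE
    print(io_nodes_per_rack, min_nodes, min_cores)
# 16 I/O nodes per rack -> minimum of  64 nodes (1024 cores)
#  8 I/O nodes per rack -> minimum of 128 nodes (2048 cores)
```

Both results match the minimum job allocations quoted in the list above.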

References

  1. ^ "Consortium of universities". Retrieved 9 March 2016.
  2. ^ "IBM System Blue Gene Solution Blue Gene/Q Application Development". IBM. Retrieved 9 March 2016.
  3. ^ "Jun 2012". TOP500. Retrieved 9 March 2016.
  4. ^ "Nov 2015". TOP500. Retrieved 9 March 2016.
  5. ^ "FERMI". TOP500. Retrieved 9 March 2016.
  6. ^ "The Graph 500 List: November 2015". Graph 500. Retrieved 9 March 2016.
  7. ^ "The Green 500 List: November 2015". Green 500. Retrieved 9 March 2016.
  8. ^ "Blue Gene". Wikipedia, the free encyclopedia.
  9. ^ "CNK operating system". Wikipedia, the free encyclopedia.

Articles about Fermi and its network

Il Sole 24 ore - in Italian

Datacenter Knowledge