Fermi (supercomputer)
| Active | Operational since 2012 |
|---|---|
| Sponsors | Ministry of Education, Universities and Research (Italy) |
| Operators | The members of the consortium[1] |
| Location | Cineca, Casalecchio di Reno, Italy |
| Architecture | IBM BlueGene/Q, 5D torus interconnect; 10,240 IBM PowerPC A2 processors at 1.6 GHz, 16 cores each; 163,840 cores |
| Power | 822 kW |
| Operating system | CNK[2] |
| Memory | 16 GB per node (1 GB per core); 160 TiB total |
| Storage | 2 PB of scratch space |
| Speed | 2.097 PFLOPS |
| Ranking | TOP500: 37 (November 2015) |
| Purpose | Materials science, weather, climatology, seismology, biology, computational chemistry, computer science |
| Legacy | Ranked 7th on the TOP500 when built[3] |
| Website | hpc |
Fermi is a 2.097-petaFLOPS supercomputer located at Cineca.[4]
History
FERMI is the main HPC system at CINECA. It was acquired in June 2012 and entered full production on 8 August of the same year. Fermi is the Italian national Tier-0 system for scientific research and is also part of the European HPC infrastructure (PRACE). Its procurement was sponsored by the Ministry of Education, Universities and Research (Italy).
In June 2012, Fermi reached seventh position on the TOP500 list of the world's fastest supercomputers.[5]
On the Graph500 list of top supercomputers,[6] Fermi reached fifth position; the system tested at 2,567 GTEPS (giga traversed edges per second).
On the Green500 list of energy-efficient supercomputers,[7] Fermi reached fifty-ninth position; the system tested at 2,176.57 MFLOPS/W (performance per watt).
Specifications
FERMI is a BlueGene/Q[8] system, the last generation of IBM's project for designing petascale supercomputers. It is made up of 10 racks, each containing two midplanes, for a total of 10,240 compute nodes and 163,840 cores.
- Each compute card ("compute node") features an IBM PowerPC A2 chip with 16 cores running at 1.6 GHz, 16 GB of RAM and the network connections. A total of 32 compute nodes are plugged into a so-called node card; 16 node cards are assembled into one midplane, which is combined with a second midplane and two I/O drawers to form a rack with 32 × 32 × 16 = 16,384 cores (see the sketch after this list). The compute nodes run a lightweight Linux-like kernel, the Compute Node Kernel (CNK[9]).
- Compute nodes are diskless; I/O functionality is provided by dedicated I/O nodes.
- Access is via SSH through the front-end nodes (login nodes) at login.fermi.cineca.it. The login nodes run a complete Red Hat Linux distribution (version 6.2). Parallel applications must be cross-compiled on the front-end nodes and can only be executed on partitions defined on the compute nodes.
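The node, core and memory totals quoted above follow directly from the hierarchy described in this list. A minimal back-of-envelope sketch (in Python, purely for illustration; the constants are those stated in the article, the variable names are hypothetical):

```python
# Back-of-envelope check of the Blue Gene/Q figures quoted in the text.
cores_per_node = 16            # IBM PowerPC A2, 1.6 GHz
ram_per_node_gib = 16          # 16 GB of RAM per compute node
nodes_per_node_card = 32       # 32 compute nodes per node card
node_cards_per_midplane = 16   # 16 node cards per midplane
midplanes_per_rack = 2         # 2 midplanes per rack
racks = 10                     # 10 racks in the CINECA configuration

nodes_per_rack = nodes_per_node_card * node_cards_per_midplane * midplanes_per_rack
cores_per_rack = nodes_per_rack * cores_per_node       # 32 * 32 * 16 = 16,384
total_nodes = racks * nodes_per_rack                   # 10,240 compute nodes
total_cores = total_nodes * cores_per_node             # 163,840 cores
total_ram_tib = total_nodes * ram_per_node_gib / 1024  # 160 TiB of memory

print(cores_per_rack, total_nodes, total_cores, total_ram_tib)
```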
The CINECA configuration is made up of 10 racks, as follows (a sketch of the allocation arithmetic appears after the list):
- 2 racks with 16 I/O nodes per rack (minimum job allocation: 64 nodes, i.e. 1,024 cores).
- 8 racks with 8 I/O nodes per rack (minimum job allocation: 128 nodes, i.e. 2,048 cores).
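The minimum allocation figures are consistent with the smallest schedulable block being the group of compute nodes served by a single I/O node: with 1,024 compute nodes per rack, 16 I/O nodes give blocks of 64 nodes and 8 I/O nodes give blocks of 128. A minimal sketch of that arithmetic (the block-per-I/O-node reading is an assumption, not stated explicitly above):

```python
# Minimum job allocation per rack type, assuming the smallest schedulable
# block is the set of compute nodes served by one I/O node (an
# interpretation of the figures above, not an official formula).
nodes_per_rack = 1024
cores_per_node = 16

for io_nodes_per_rack in (16, 8):
    min_nodes = nodes_per_rack // io_nodes_per_rack    # 64 or 128 nodes
    min_cores = min_nodes * cores_per_node             # 1,024 or 2,048 cores
    print(io_nodes_per_rack, min_nodes, min_cores)
```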
References
- ^ "Consortium of universities". Retrieved 9 March 2016.
- ^ "IBM System Blue Gene Solution Blue Gene/Q Application Development". IBM. Retrieved 9 March 2016.
- ^ "Jun 2012". TOP500. Retrieved 9 March 2016.
- ^ "Nov 2015". TOP500. Retrieved 9 March 2016.
- ^ "FERMI". TOP500. Retrieved 9 March 2016.
- ^ "The Graph 500 List: November 2015". Graph 500. Retrieved 9 March 2016.
- ^ "The Green 500 List: November 2015". Graph 500. Retrieved 9 March 2016.
- ^ "Blue Gene". Wikipedia, the free encyclopedia.
- ^ "CNK operating system". Wikipedia, the free encyclopedia.