History of supercomputing

From Wikipedia, the free encyclopedia

Revision as of 12:12, 19 July 2011

A Cray-1 supercomputer preserved at the Deutsches Museum

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[1] The CDC 6600, released in 1964, is generally considered the first supercomputer.[2][3]

In the 1970s, Cray formed his own company and using new approaches to machine architecture produced supercomputers which dominated the field until the end of the 1980s.

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.

By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.

Progress in the first decade of the 21st century was dramatic: supercomputers with over 60,000 processors appeared and reached petaflop performance levels.

The beginnings: CDC in the 1960s

In 1957 a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, MN. Seymour Cray left Sperry a year later to join his colleagues at CDC.[1] In 1960 Cray completed the CDC 1604, the first solid state computer, and the fastest computer in the world at a time when vacuum tubes were found in most large computers.[4]

The CDC 6600 with the system console

Around 1960 Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation, along with Jim Thornton, Dean Roush and about 30 other engineers, Cray completed the CDC 6600 in 1964. Given that the 6600 outran all computers of the time by about ten times, it was dubbed a supercomputer and defined the supercomputing market when one hundred computers were sold at $8 million each.[5][4]

The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota and with it the 6600 could sustain 500 kilo-FLOPS on standard mathematical operations.[6] In 1968 Cray completed the CDC 7600, again the fastest computer in the world.[4] At 36 MHz, the 7600 had about three and a half times the clock speed of the 6600, but ran significantly faster due to other technical innovations.

Cray left CDC in 1972 to form his own company.[4] Two years after his departure CDC delivered the STAR-100, which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing, an idea inspired around 1964 by the APL programming language.[7][8]

The Cray era: mid-1970s and 1980s

A liquid cooled Cray-2 supercomputer

Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[8][9] The Cray-1 was a vector processor which introduced a number of innovations such as chaining, in which scalar and vector registers generate interim results that can be used immediately, without additional memory references that would reduce computational speed.[10] The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously.[10]
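
The effect of chaining can be illustrated with a small sketch (a conceptual illustration only, not the Cray-1's actual microarchitecture): each element produced by a vector multiply is consumed by a vector add immediately, so the intermediate vector never makes a round trip through memory.

```python
# Conceptual illustration of vector chaining (an assumption-laden sketch,
# not Cray hardware behavior): fusing the multiply and add into one pass
# avoids storing and re-loading the intermediate vector.

def unchained(a, b, c):
    """Two separate passes: the intermediate t is written out, then re-read."""
    t = [x * y for x, y in zip(a, b)]       # pass 1: vector multiply
    return [x + y for x, y in zip(t, c)]    # pass 2: vector add

def chained(a, b, c):
    """One fused pass: each product feeds the add immediately."""
    return [x * y + z for x, y, z in zip(a, b, c)]

a, b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]
assert chained(a, b, c) == unchained(a, b, c)  # same result, fewer memory trips
```

Both versions compute the same elementwise a*b + c; the chained form models why avoiding the extra memory references mattered for sustained speed.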

The Cray-2, released in 1985, was an 8-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990, when the ETA-10G from CDC overtook it. The Cray-2 was a totally new design: it did not use chaining and had a high memory latency, but used deep pipelining and was ideal for problems that required large amounts of memory.[10]

The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor.[10] In the late 1980s, Cray's experiment with gallium arsenide semiconductors in the Cray-3 did not succeed. He began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed.[9]

Massive processing: the 1990s

The Cray-2 which set the frontiers of supercomputing in the mid to late 1980s had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.

Rear of the Paragon cabinet showing the bus bars and mesh routers

The SX-3/44R was announced by NEC Corporation in 1989 and a year later earned the fastest-in-the-world title with a 4-processor model.[11] However, Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor.[12][13] The Hitachi SR2201, on the other hand, obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[14][15][16]

In the same timeframe the Intel Paragon could have 1000 to 4000 i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[17]
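
The cost of communication on such a mesh can be sketched as follows (an illustrative model only; the node coordinates and the X-then-Y dimension-ordered routing policy are common textbook conventions, not documented Paragon internals). Under dimension-ordered routing, a message travels along the X axis first and then along Y, so the hop count between two nodes is the Manhattan distance between their grid coordinates.

```python
# Sketch of message routing on a 2D mesh (illustrative assumptions, not
# Intel's actual implementation): dimension-ordered X-then-Y routing.

def mesh_hops(src, dst):
    """Hop count between (x, y) node coordinates on a 2D mesh."""
    return abs(dst[0] - src[0]) + abs(dst[1] - src[1])

def route(src, dst):
    """Nodes visited under X-then-Y dimension-ordered routing."""
    path = [src]
    x, y = src
    step = 1 if dst[0] >= x else -1
    while x != dst[0]:            # travel along X first
        x += step
        path.append((x, y))
    step = 1 if dst[1] >= y else -1
    while y != dst[1]:            # then travel along Y
        y += step
        path.append((x, y))
    return path

p = route((0, 0), (2, 1))
assert len(p) - 1 == mesh_hops((0, 0), (2, 1))  # 3 hops
```

The deterministic X-then-Y order is what makes this style of routing deadlock-free on a mesh, which is one reason it was popular in machines of this era.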

The Paragon architecture soon led to the Intel ASCI Red supercomputer, which held the top supercomputing spot until the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark in 1996, eventually reaching 2 teraflops.[18]

Petaflop computing in the 21st century

A Blue Gene/P supercomputer at Argonne National Laboratory

Significant progress was made in the first decade of the 21st century: by 2004 the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.86 teraflops, using proprietary vector processing chips.[19]

The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption, so that a larger number of processors can be used with air cooling. It can use over 60,000 processors, with 2048 processors per rack, and connects them via a three-dimensional torus interconnect.[20]
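
The benefit of a torus over a plain mesh can be sketched numerically (an illustrative model; the node geometry below is an assumption for the example, not Blue Gene's actual partition shape). The wrap-around links mean the distance in each dimension is at most half that dimension's size, so even "opposite corners" are only a few hops apart.

```python
# Shortest-path hop count on a 3D torus (illustrative sketch; the 8 x 16 x 16
# geometry is an assumed example, not an actual Blue Gene configuration).

def torus_hops(src, dst, dims):
    """Shortest hop count between 3D coordinates on a torus of size `dims`."""
    total = 0
    for s, d, n in zip(src, dst, dims):
        straight = abs(d - s)
        total += min(straight, n - straight)  # take the wrap-around link if shorter
    return total

# On an 8 x 16 x 16 torus (2048 nodes), opposite corners are only 3 hops apart:
assert torus_hops((0, 0, 0), (7, 15, 15), (8, 16, 16)) == 3
```

On a plain mesh the same pair of nodes would be 7 + 15 + 15 = 37 hops apart, which is why the torus topology keeps worst-case latency low as the machine scales.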

Progress in China has been rapid: a Chinese system placed 51st on the TOP500 list in June 2003, then 14th in November 2003, 10th in June 2004 and 5th during 2005, before China gained the top spot in 2010 with the 2.5 petaflop Tianhe-I supercomputer.[21][22]

In July 2011 the 8.1 petaflop Japanese K computer became the fastest in the world, using over 60,000 commercial scalar SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranks as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[23][24][25]

Historical TOP500 table

This is a list of the computers which appeared at the top of the TOP500 list,[26] where the "Peak speed" is given as the "Rmax" rating.

Year Supercomputer Peak speed
(Rmax)
Location
1964 CDC 6600 3 MFLOPS AEC-Lawrence Livermore National Laboratory, California, USA
1969 CDC 7600 36 MFLOPS
1974 CDC STAR-100 100 MFLOPS
1975 Burroughs ILLIAC IV 150 MFLOPS NASA Ames Research Center, California, USA
1976 Cray-1 250 MFLOPS Energy Research and Development Administration (ERDA)
Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 CDC Cyber 205 400 MFLOPS (~40 systems worldwide)
1983 Hitachi HITAC S-810 630 MFLOPS University of Tokyo, Japan
1983 Cray X-MP/4 941 MFLOPS U.S. Department of Energy (DoE)
Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1983 NEC SX-2 1.3 GFLOPS NEC Fuchu Plant, Fuchū, Tokyo, Japan
1985 Cray-2/8 3.9 GFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
1989 ETA10-G/8 10.3 GFLOPS Florida State University, Florida, USA
1990 Anritsu QCDPAX 14 GFLOPS University of Tsukuba, Tsukuba, Japan
1990 NEC SX-3/44R 23.2 GFLOPS NEC Fuchu Plant, Fuchū, Tokyo, Japan
1991 APE 100 100 GFLOPS INFN, Rome, Italy
1993 Fujitsu Numerical Wind Tunnel 124.50 GFLOPS National Aerospace Laboratory, Tokyo, Japan
1993 Intel Paragon XP/S 140 143.40 GFLOPS DoE-Sandia National Laboratories, New Mexico, USA
1994 Fujitsu Numerical Wind Tunnel 170.40 GFLOPS National Aerospace Laboratory, Tokyo, Japan
1996 Hitachi SR2201/1024 220.4 GFLOPS University of Tokyo, Japan
Hitachi CP-PACS/2048 368.2 GFLOPS University of Tsukuba, Tsukuba, Japan
1997 Intel ASCI Red/9152 1.338 TFLOPS DoE-Sandia National Laboratories, New Mexico, USA
1999 Intel ASCI Red/9632 2.3796 TFLOPS
2000 IBM ASCI White 7.226 TFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
2002 NEC Earth Simulator 35.86 TFLOPS Earth Simulator Center, Yokohama, Japan
2004 IBM Blue Gene/L 70.72 TFLOPS DoE/IBM Rochester, Minnesota, USA
2005 136.8 TFLOPS DoE/U.S. National Nuclear Security Administration,
Lawrence Livermore National Laboratory, California, USA
280.6 TFLOPS
2007 478.2 TFLOPS
2008 IBM Roadrunner 1.026 PFLOPS DoE-Los Alamos National Laboratory, New Mexico, USA
1.105 PFLOPS
2009 Cray Jaguar 1.759 PFLOPS DoE-Oak Ridge National Laboratory, Tennessee, USA
2010 Tianhe-IA 2.566 PFLOPS National Supercomputing Center, Tianjin, China
2011 Fujitsu K computer 8.162 PFLOPS RIKEN, Kobe, Japan
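
The table spans a roughly billion-fold increase in peak speed. A back-of-the-envelope calculation (figures taken from the table above; the derived doubling time is only a rough estimate, not an official statistic) shows how fast that growth really was:

```python
import math

# Endpoints taken from the table above (hedged estimate, not official data).
start_year, start_flops = 1964, 3e6        # CDC 6600, 3 MFLOPS
end_year, end_flops = 2011, 8.162e15       # K computer, 8.162 PFLOPS

years = end_year - start_year              # 47 years
factor = end_flops / start_flops           # roughly a 2.7-billion-fold increase

# Time for peak speed to double, assuming steady exponential growth:
doubling = years * math.log(2) / math.log(factor)

assert 1.4 < doubling < 1.6                # i.e. doubling roughly every 18 months
```

Under this simple exponential model, the top-ranked machine's speed doubled roughly every year and a half over the 47-year span.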

References

  1. ^ a b Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen, Guang-Huei Lin, Pao-Ann Hsiung, Yu-Hen Hu 2009 ISBN pages 70-72
  2. ^ History of computing in education by John Impagliazzo, John A. N. Lee 2004 ISBN 1402081359 page 172 [1]
  3. ^ The American Midwest: an interpretive encyclopedia by Richard Sisson, Christian K. Zacher 2006 ISBN 0253348862 page 1489 [2]
  4. ^ a b c d Wisconsin Biographical Dictionary by Caryn Hannan 2008 ISBN 1878592637 pages 83-84 [3]
  5. ^ A history of modern computing by Paul E. Ceruzzi 2003 ISBN 9780262532037 page 161 [4]
  6. ^ Frisch, Michael (Dec 1972). "Remarks on Algorithms". Communications of the ACM 15 (12): 1074.
  7. ^ An Introduction to high-performance scientific computing by Lloyd Dudley Fosdick 1996 ISBN 0262061813 page 418
  8. ^ a b Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 9781558605398 page 41-48
  9. ^ a b Milestones in computer science and information technology by Edwin D. Reilly 2003 ISBN 1573565210 page 65
  10. ^ a b c d Parallel computing for real-time signal processing and control by M. O. Tokhi, Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202
  11. ^ Computing methods in applied sciences and engineering by R. Glowinski, A. Lichnewsky ISBN 0898712645 page 353-360
  12. ^ TOP500 Annual Report 1994.
  13. ^ N. Hirose and M. Fukuda (1997). Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory. Proceedings of HPC-Asia '97. IEEE Computer Society. doi:10.1109/HPC.1997.592130.
  14. ^ H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, M. Kashiyama, H. Wada, T. Sumimoto, Architecture and performance of the Hitachi SR2201 massively parallel processor system, Proceedings of 11th International Parallel Processing Symposium, April 1997, Pages 233-241.
  15. ^ Y. Iwasaki, The CP-PACS project, Nuclear Physics B - Proceedings Supplements, Volume 60, Issues 1-2, January 1998, Pages 246-254.
  16. ^ A.J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
  17. ^ Scalable input/output: achieving system balance by Daniel A. Reed 2003 ISBN 9780262681421 page 182
  18. ^ Algorithms for parallel processing, Volume 105 by Michael T. Heath 1998 ISBN 0387986804 page 323
  19. ^ Sato, Tetsuya (2004). "The Earth Simulator: Roles and Impacts". Nuclear Physics B Proceedings Supplements. 129: 102. doi:10.1016/S0920-5632(03)02511-8.
  20. ^ Euro-Par 2005 parallel processing: 11th International Euro-Par Conference edited by José Cardoso Cunha, Pedro D. Medeiros 2005 ISBN 9783540287001 pages 560-567
  21. ^ Graham, Susan L.; Snir, Marc; Patterson, Cynthia A. (2005). Getting up to speed: the future of supercomputing. p. 188. ISBN 0309095026.
  22. ^ New York Times
  23. ^ "Japanese supercomputer 'K' is world's fastest". The Telegraph. 20 June 2011. Retrieved 20 June 2011.
  24. ^ "Japanese 'K' Computer Is Ranked Most Powerful". The New York Times. 20 June 2011. Retrieved 20 June 2011.
  25. ^ "Supercomputer "K computer" Takes First Place in World". Fujitsu. Retrieved 20 June 2011.
  26. ^ "Directory page for Top500 lists. Result for each list since June 1993". Top500.org. Retrieved 2010-10-31.