Computer performance by orders of magnitude
Deciscale computing (10⁻¹)
- 5×10⁻¹ OP/S: Speed of the average human performing multiplication with pen and paper
Scale computing (10⁰)
- 1 OP/S: Speed of the average human performing addition with pen and paper
- 1 OP/S: Speed of the Zuse Z1
- 5 OP/S: World record speed for human addition
Decascale computing (10¹)
- 6×10¹: Upper end of serialized human perception computation (light bulbs in the US do not appear to flicker to the human observer)
Hectoscale computing (10²)
- 2.2×10²: Upper end of serialized human throughput, roughly set by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)
- 2×10²: IBM 602 computer, 1946
Kiloscale computing (10³)
- 92×10³: Intel 4004, the first commercially available full-function CPU on a chip, released in 1971
- 500×10³: Colossus vacuum-tube supercomputer, 1943
Megascale computing (10⁶)
- 1×10⁶: Motorola 68000, commercial computing, 1979
- 1.2×10⁶: IBM 7030 "Stretch" transistorized supercomputer, 1961
Gigascale computing (10⁹)
- 1×10⁹: ILLIAC IV supercomputer, 1972; performed the first computational fluid dynamics problems
- 1.354×10⁹: Intel Pentium III, commercial computing, 1999
- 147.6×10⁹: Intel Core i7-980X Extreme Edition, commercial computing, 2010
Terascale computing (10¹²)
- 1.34×10¹²: Intel ASCI Red supercomputer, 1997
- 1.344×10¹²: NVIDIA GeForce GTX 480, 2010, at peak performance
- 4.64×10¹²: AMD (under ATI branding) Radeon HD 5970, 2009, at peak performance
- 5.152×10¹²: NVIDIA S2050/S2070 1U GPU computing system
- 80×10¹²: IBM Watson
- 478.2×10¹²: IBM Blue Gene/L supercomputer, 2007
Petascale computing (10¹⁵)
- 1.026×10¹⁵: IBM Roadrunner supercomputer, 2008
- 8.1×10¹⁵: Folding@home distributed computing system, the fastest computing system as of 2012
- 11.5×10¹⁵: Google TPU pod containing 64 second-generation TPUs, May 2017
- 17.17×10¹⁵: IBM Sequoia's LINPACK performance, June 2013
- 33.86×10¹⁵: Tianhe-2's LINPACK performance, June 2013
- 36.8×10¹⁵: Estimated computational power required to simulate a human brain in real time
- 93.01×10¹⁵: Sunway TaihuLight's LINPACK performance, June 2016
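The section headings of this article follow SI-prefix steps of 10³ above the kiloscale (with finer steps below it). As an illustrative sketch only (the function and table names are my own, not from the article), a few lines of Python can map a raw operations-per-second figure to the scale name it falls under:

```python
# Illustrative sketch: map an OP/S figure to the scale names used by this
# article's section headings. SCALES and scale_name are assumed names.
import math

# Lower-bound exponents for each scale, in ascending order.
SCALES = [
    (-1, "deciscale"),
    (0, "scale"),
    (1, "decascale"),
    (2, "hectoscale"),
    (3, "kiloscale"),
    (6, "megascale"),
    (9, "gigascale"),
    (12, "terascale"),
    (15, "petascale"),
    (18, "exascale"),
    (21, "zettascale"),
    (24, "yottascale"),
]

def scale_name(ops_per_second: float) -> str:
    """Return the article's scale name for a given OP/S figure."""
    exponent = math.floor(math.log10(ops_per_second))
    name = SCALES[0][1]
    for threshold, candidate in SCALES:
        if exponent >= threshold:
            name = candidate  # keep the highest threshold we clear
    return name

print(scale_name(92e3))      # Intel 4004 -> kiloscale
print(scale_name(33.86e15))  # Tianhe-2 -> petascale
```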
Exascale computing (10¹⁸)
- 1×10¹⁸: The need for exascale computing was estimated to become pressing around 2018
- 1.5×10¹⁸: Bitcoin network hash rate, which reached 1.5 exahashes per second in mid-2016
Zettascale computing (10²¹)
- 1×10²¹: Accurate global weather estimation on the scale of approximately two weeks. Assuming Moore's law holds, such systems may be feasible around 2030.
A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth in the first quarter of 2011.
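The scale of that data stream follows from simple arithmetic: assuming each generated result is one IEEE 754 single-precision (4-byte) value, 10²¹ results per second amounts to 4×10²¹ bytes, i.e. 4 zettabytes, every second:

```python
# Sanity-check of the claim above, assuming each generated result is one
# IEEE 754 single-precision (4-byte) value.
FLOPS = 1e21            # zettascale: 10^21 operations per second
BYTES_PER_FLOAT = 4     # single precision
ZETTABYTE = 1e21        # bytes

bytes_per_second = FLOPS * BYTES_PER_FLOAT
print(bytes_per_second / ZETTABYTE)  # -> 4.0 (zettabytes of data per second)
```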
Yottascale computing and beyond (>10²⁴)
- 257.6×10²⁴: Estimated computational power required to simulate 7 billion human brains in real time
- 4×10⁴⁸: Estimated computational power of a Matrioshka brain whose power source is the Sun, whose outermost layer operates at 10 kelvin, and whose constituent parts operate at or near the Landauer limit, drawing power at the efficiency of a Carnot engine. Approximate maximum computational power for a Kardashev Type II civilization.
- 5×10⁵⁸: Estimated computational power of a galaxy equivalent in luminosity to the Milky Way converted entirely into Matrioshka brains. Approximate maximum computational power for a Kardashev Type III civilization.
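The 7-billion-brain figure is the petascale per-brain estimate (36.8×10¹⁵ FLOPS for one human brain in real time) multiplied by a world population of 7 billion; a quick check:

```python
# Quick arithmetic check: per-brain petascale estimate times 7 billion brains.
per_brain = 36.8e15    # FLOPS to simulate one human brain in real time
population = 7e9       # brains

total = per_brain * population
print(f"{total:.3e}")  # -> 2.576e+26, i.e. 257.6×10^24 FLOPS
```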
See also
- Futures studies – study of possible, probable, and preferable futures, including making projections of future technological advances
- History of computing hardware (1960s–present)
- List of emerging technologies – new fields of technology, typically on the cutting edge. Examples include genetics, robotics, and nanotechnology (GNR).
- Artificial intelligence – computer mental abilities, especially those that previously belonged only to humans, such as speech recognition, natural language generation, etc.
- Quantum computing
- Moore's law – observation (not actually a law) that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.
- Timeline of computing
- Technological singularity – hypothetical point in the future when computer capacity rivals that of a human brain, enabling the development of strong AI — artificial intelligence at least as smart as a human.
- TOP500 – list of the 500 most powerful (non-distributed) computer systems in the world