From Wikipedia, the free encyclopedia

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical entity that possesses intelligence surpassing that of the brightest human minds. Superintelligence may also refer to the specific form or degree of intelligence possessed by such an entity. The possibility of superhuman intelligence is frequently discussed in the context of artificial intelligence. Increasing natural intelligence through genetic engineering or brain–computer interfacing is a common motif in futurology and science fiction. Collective intelligence is often regarded as a pathway to superintelligence, or even as an existing realization of the concept.


Superintelligence is defined as an “intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”[1] The definition does not specify the means by which superintelligence could be achieved: whether biological, technological, or some combination. Neither does it specify whether superintelligence requires self-consciousness or experience-driven perception.

The transhumanist movement distinguishes between “weak” and “strong” superintelligence. Weak superintelligence operates on the same qualitative level as a human brain, but much faster. Strong superintelligence operates on a qualitatively superior level; a strongly superintelligent mind is superior to a human's in the same way that a human brain is considered qualitatively superior to a dog's.[2]

In everyday language, profoundly gifted people or savants are sometimes described as superintelligent, and clever search algorithms or the Semantic Web are sometimes considered superintelligent. While such outstanding people or machines have an advantage over average human brains, they do not qualify as superintelligences, because they lack superior abilities across cognition and creativity generally. Likewise, the scientific community is a heterogeneous collection of individuals rather than a singular entity, and cannot be called a superintelligence.[citation needed]


In transhumanism, different currents disagree about how a superintelligence might be created. Roughly three paths are outlined:

  • An artificial general intelligence capable of learning and improving itself could, after several cycles of self-improvement, achieve superintelligence.[3]
  • Biological enhancement (selective breeding, genetic manipulation, or medical treatment) could, over several iterations, produce superintelligence or other superhuman traits.
  • Cybernetic enhancement could considerably increase the capabilities of the human mind, at least in terms of speed and memory. Technical realization of neural human–computer interfaces has begun in the field of prosthetics.[4] Genuine enhancement of a human brain has not yet been implemented.

Evaluative diversity

Main article: Evaluative diversity

Chris Santos-Lang pointed out that intelligence historically evolved within interdependent evaluative ecosystems, and that the rise of a single lineage of intelligence might threaten its own ecosystem the way an invasive species threatens a biological ecosystem. He advocated monitoring evaluative diversity in the same way that biodiversity is monitored.[5] Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies,[6] acknowledged that evaluative diversity is likely part of the solution to the problem of "moral uncertainty."[7]

References

  1. ^ Bostrom, Nick (2006). "How long before superintelligence?". Linguistic and Philosophical Investigations 5 (1): 11–30. 
  2. ^ Transvision (transhumanist conference): "What is superintelligence?" 
  3. ^ Anissimov, Michael (2003). "Forecasting Superintelligence: the Technological Singularity". 
  4. ^ Sajda, Paul; et al. "In a Blink of an Eye and a Switch of a Transistor: Cortically Coupled Computer Vision". 
  5. ^ Santos-Lang, Christopher (2014). "Our responsibility to manage evaluative diversity". ACM SIGCAS Computers & Society 44 (2): 16–19. doi:10.1145/2656870.2656874. ISSN 0095-2737. 
  6. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. ISBN 978-0199678112. 
  7. ^ Bostrom, Nick (January 1, 2009). "Moral uncertainty – towards a solution?". Overcoming Bias. Retrieved July 24, 2014. 
