Superintelligence

From Wikipedia, the free encyclopedia

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.

Technological forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Experts in AI and biotechnology do not expect any of these technologies to produce a superintelligence in the very near future. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[1]

Definition

Summarizing the views of intelligence researchers, Linda Gottfredson writes:

Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book-learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings – "catching on," "making sense" of things, or "figuring out" what to do.[2]

There is no agreed-upon way to measure intelligence in all varieties of agent. Intelligence quotient (IQ) tests are used to measure normal human variation in g factor, a general skill at cognitive tasks. In machines, one of the oldest operationalizations of intelligence is the Turing test, which judges a system’s intelligence by how well it can fool a human interrogator into thinking it is human. However, IQ and Turing tests both focus on ordinary human ability levels; neither extends to provide a definition or measure of superhuman intelligence.

Shane Legg and Marcus Hutter make use of a more abstract definition of intelligence, as "an agent's ability to achieve goals in a wide range of environments".[3] On this view, matching or surpassing human-level intelligence is a matter of being able to complete tasks and solve problems in many different domains, regardless of how or why one goes about doing so. "Intelligence is not really the ability to do anything in particular, rather it is a very general ability that affects many kinds of performance."[4] Legg and Hutter argue that this approach makes it possible to define measures of intelligence that are less narrow and human-specific, such as their 'universal intelligence' measure; Hutter's idealized agent AIXI is constructed to maximize this measure.[5]
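
As a sketch of their published definition (notation follows their papers), Legg and Hutter define the universal intelligence of an agent \(\pi\) as its complexity-weighted performance across all computable environments:

\[
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]

where \(E\) is the set of computable reward-bounded environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the expected cumulative reward agent \(\pi\) obtains in \(\mu\). Simpler environments carry more weight, and an agent scores highly only by performing well across many environments.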

Oxford futurist Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."[6] The program Fritz falls short of superintelligence even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks.[7] Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Feasibility

Whether superhuman intelligence is possible depends not only on the feasibility of particular methods for developing it (see the next section), but also on whether humans fall short on various cognitive metrics, such as computational efficiency and speed. Large deficiencies in human thought suggest that more powerful reasoning systems are physically possible.

Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)."[8] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly tasks that demand speed or long sequences of actions.
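
The cited ratios can be checked directly from the figures:

\[
\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7}
\qquad\text{and}\qquad
\frac{3 \times 10^{8}\ \text{m/s}}{120\ \text{m/s}} = 2.5 \times 10^{6},
\]

i.e., seven orders of magnitude in switching speed, and more than six orders of magnitude in signal transmission speed if optical interconnects operate near the speed of light.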

Computational resources place another limit on present-day human cognition. A non-human (or modified human) brain could be made much larger, as many supercomputers already are. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to human reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.[10] All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.[11]

Superintelligence scenarios

Biological superintelligence

Carl Sagan suggests that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree argues in "Our Fragile Intellect" that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this decline is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly.[13] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.[14]
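
These per-selection gains correspond to the expected maximum of n draws from a normal distribution. A minimal Monte Carlo sketch of that calculation, which assumes, purely for illustration (the value is chosen to reproduce the cited figures and is not a parameter stated in Bostrom's text), that embryos' additive genetic IQ values vary around the parental mean with a standard deviation of about 7.5 points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption (not from Bostrom's text): sibling embryos'
# additive genetic IQ values are normally distributed around the
# parental mean with a standard deviation of about 7.5 points.
SIBLING_SD = 7.5
TRIALS = 10_000

def expected_gain(n_embryos):
    """Mean IQ gain from implanting the highest-scoring of n embryos."""
    draws = rng.normal(0.0, SIBLING_SD, size=(TRIALS, n_embryos))
    return draws.max(axis=1).mean()

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4}: ~{expected_gain(n):4.1f} IQ points")
# Prints roughly 4.2, 11.5, 18.8, and 24.3 points respectively,
# in line with the figures cited above.
```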

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.[15]

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.[16]

Artificial superintelligence

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on timescales. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% expected this to happen sometime after 2056; and 41% expected machines never to reach that milestone. In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), respondents were asked by what year they expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs). The median year given with 10% confidence was 2024 (mean 2034, st. dev. 33 years); with 50% confidence, 2050 (mean 2072, st. dev. 110 years); and with 90% confidence, 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to machine superintelligence being invented within 30 years of the invention of approximately human-level machine intelligence.[17]

Philosopher David Chalmers argues that generally intelligent AI (artificial general intelligence) is a very likely path to superhuman intelligence. Chalmers breaks this claim down into three parts: that AI can achieve equivalence to human intelligence; that it can be extended to surpass human intelligence; and that it can be further amplified to completely dominate humans across arbitrary tasks.[18]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulable by synthetic materials.[19] He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI.[20] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.[21]
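
As a toy illustration of the kind of evolutionary algorithm Chalmers invokes (a generic genetic algorithm on bitstrings, not any method proposed in his paper), the following sketch evolves a population toward a simple fitness target through selection, crossover, and mutation:

```python
import random

random.seed(0)

GENOME_LEN, POP_SIZE, GENERATIONS = 50, 40, 60

def fitness(genome):
    """Toy objective: number of 1-bits (a stand-in for task performance)."""
    return sum(genome)

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in genome]

# Random initial population of bitstrings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Variation: refill the population with mutated offspring of random parents.
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print("best fitness:", max(map(fitness, population)), "of", GENOME_LEN)
```

Under this selection pressure the best genome approaches the maximum fitness within a few dozen generations; the argument in the literature is about whether analogous search processes scale to far harder objectives.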

Citations

  1. ^ Legg 2008, pp. 135–137.
  2. ^ Gottfredson, Linda (1997). "Mainstream Science on Intelligence". Intelligence. 24 (1): 13.
  3. ^ Legg, Shane; Hutter, Marcus (2007). "A Collection of Definitions of Intelligence". Frontiers in Artificial Intelligence and Applications. 157: 9. Retrieved September 19, 2014.
  4. ^ Legg 2008, p. 8.
  5. ^ Legg, Shane; Hutter, Marcus (2005). "A Universal Measure of Intelligence for Artificial Agents". Proceedings of the 19th International Joint Conference on Artificial Intelligence. Retrieved September 19, 2014.
  6. ^ Bostrom, Nick (2006). "How long before superintelligence?". Linguistic and Philosophical Investigations. 5 (1): 11–30.
  7. ^ Bostrom 2014, p. 22.
  8. ^ Bostrom 2014, p. 59.
  9. ^ Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF) (Technical report). Machine Intelligence Research Institute. p. 35. 2013-1.
  10. ^ Bostrom 2014, pp. 56–57.
  11. ^ Bostrom 2014, pp. 52, 59–61.
  12. ^ Sagan, Carl (1977). The Dragons of Eden. Random House.
  13. ^ Bostrom 2014, pp. 37–39.
  14. ^ Bostrom 2014, p. 39.
  15. ^ Bostrom 2014, pp. 48–49.
  16. ^ Bostrom 2014, pp. 36–37, 42, 47.
  17. ^ Müller & Bostrom 2014, pp. 3–4, 6, 9–12.
  18. ^ Chalmers 2010, p. 7.
  19. ^ Chalmers 2010, pp. 7–9.
  20. ^ Chalmers 2010, pp. 10–11.
  21. ^ Chalmers 2010, pp. 11–13.

Bibliography