Recursive self-improvement

Recursive self-improvement is the speculative ability of a strong artificial intelligence program to improve its own software, with each improved version better able to improve itself in turn.

This is sometimes also referred to as seed AI, because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find ways of optimizing its structure and improving its abilities further. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

History

The notion of an "intelligence explosion", in which recursive self-improvement leads to superintelligence, was first described by I. J. Good (1965), who speculated on the effects of superhuman machines:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Compilers

A limited example is that programming language compilers are often used to compile themselves. As compilers become better optimized, they can recompile themselves and so become faster at compiling.

However, the recompiled compiler emits code that is no faster than before, so this provides only a very limited, one-step self-improvement, as the sketch below illustrates. Existing optimizers can transform code into a functionally equivalent, more efficient form, but cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better: it will never spot new optimization tricks that earlier versions failed to see, nor innovate new ways of improving its own program.
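
This one-step limit can be made concrete with a toy model. The following Python sketch (the class and field names are illustrative assumptions, not drawn from any real compiler) treats a compiler as two independent properties: the quality of the code it emits and the speed of its own binary. Self-compilation lifts the binary's speed to the level of its emitted code, but since the emitted code's quality never changes, a second round of self-compilation is a fixed point:

    # Toy model of compiler self-compilation (illustrative only; the
    # names and numbers are hypothetical, not from a real compiler).
    from dataclasses import dataclass

    @dataclass
    class Compiler:
        codegen_quality: float  # speed of the code it emits (arbitrary units)
        binary_speed: float     # speed of the compiler binary itself

        def self_compile(self) -> "Compiler":
            # The compiler's binary is just another program it emits, so
            # the new binary runs exactly as fast as its emitted code.
            return Compiler(self.codegen_quality, self.codegen_quality)

    # An old, slow build of a good optimizing compiler:
    c0 = Compiler(codegen_quality=10.0, binary_speed=2.0)
    c1 = c0.self_compile()   # Compiler(10.0, 10.0): compiles faster
    c2 = c1.self_compile()   # Compiler(10.0, 10.0): fixed point
    assert c1 == c2          # no further gain without new source-level ideas

In this model, a genuine improvement would require raising codegen_quality, that is, editing the compiler's source with new optimization ideas, which is exactly what the compiler cannot do to itself.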

A seed AI, by contrast, must be able to understand the purpose behind the various elements of its design and to devise entirely new modules that make it genuinely more intelligent and more effective in fulfilling its purpose.

Hard vs. soft takeoff

A "hard takeoff" refers to the scenario in which a single AI project rapidly self-improves, on a timescale of a few years or even days. A "soft takeoff" refers to a longer-term process of integrating gradual AI improvements into society more broadly.[1] Eliezer Yudkowsky and Robin Hanson have extensively debated these positions, with Yudkowsky arguing for the realistic possibility of hard takeoff, while Hanson believes its probability is less than 1%.[2]

Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations. For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[3] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[4] William Hertling replies that while he agrees there won't be a hard takeoff, he expects that Moore's law and the ability to copy computers may still thoroughly change the world sooner than most people expect. He suggests that when we postpone the predicted arrival date of these changes, "we're less likely as a society to examine both AI progress and take steps to reduce the risks of AGI."[5]
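
Naam's scaling point can be illustrated with a toy recurrence (the model and exponent values below are illustrative assumptions, not taken from Naam's essays). Suppose an AI of intelligence i can apply design effort proportional to i, while the difficulty of reaching a smarter design grows as i to the power k; the per-step gain is then i to the power (1 - k):

    # Toy model of takeoff dynamics (illustrative assumptions, not a
    # model taken from Naam's essays). An AI of intelligence i applies
    # design effort proportional to i, while the difficulty of reaching
    # a smarter design scales as i**k, so the per-step gain is i**(1 - k).

    def takeoff(k: float, steps: int = 40, i0: float = 1.0) -> float:
        """Return intelligence after `steps` rounds of self-improvement."""
        i = i0
        for _ in range(steps):
            i += i ** (1.0 - k)
        return i

    # k = 0: constant difficulty, gains compound -> exponential "explosion"
    # k = 1: difficulty keeps pace with ability  -> steady linear growth
    # k = 2: doubling intelligence more than doubles the work -> soft takeoff
    for k in (0.0, 1.0, 2.0):
        print(f"k = {k}: intelligence after 40 steps = {takeoff(k):,.1f}")

Under these assumptions, the k = 0 trajectory grows by a factor of about 10^12 in 40 steps, while the k = 2 trajectory reaches only about 9; Naam's claim is, in effect, that the true scaling lies closer to the latter regime.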

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[6]

Organizations

Creating seed AI is the goal of several organizations. The Machine Intelligence Research Institute is the most prominent of those explicitly working to create seed AI[citation needed] and ensure its safety.[7] Others include the Artificial General Intelligence Research Institute, creator of the Novamente AI engine, Adaptive Artificial Intelligence Incorporated, Texai.org, and Consolidated Robotics.

References

  1. ^ "AI takeoff". Retrieved 16 May 2014. 
  2. ^ Hanson, Robin; Yudkowsky, Eliezer (2013). The Hanson-Yudkowsky AI-Foom Debate. Machine Intelligence Research Institute. Retrieved 16 May 2014.
  3. ^ Naam, Ramez (2014). "The Singularity Is Further Than It Appears". Retrieved 16 May 2014. 
  4. ^ Naam, Ramez (2014). "Why AIs Won't Ascend in the Blink of an Eye - Some Math". Retrieved 16 May 2014. 
  5. ^ Hertling, William (2014). "The Singularity is Still Closer than it Appears". Retrieved 16 May 2014. 
  6. ^ Hall, J. Storrs (2008). "Engineering Utopia". Artificial General Intelligence, 2008: Proceedings of the First AGI Conference: 460–467. Retrieved 16 May 2014. 
  7. ^ "Machine Intelligence Research Insitute - Research". Retrieved 2014-05-01. 
