Recursive Self Improvement
This article refers to the artificial intelligence concept, not the song by How to Destroy Angels.
Recursive Self Improvement is the ability of a strong artificial intelligence to program its own software, recursively. Such a system is sometimes also referred to as Seed AI, because if an AI were created whose engineering capabilities matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find further ways of optimizing its structure and improving its abilities. It is speculated that over many iterations such an AI would far surpass human cognitive abilities (Yudkowsky 2013). The successful implementation of seed AI would result in a technological singularity.
This notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Optimizing compilers illustrate the limits of mechanical self-improvement: a compiler can be used to compile a more efficient version of itself, but the result cannot then produce fundamentally faster code, so this provides only a very limited, one-step self-improvement. Existing optimizers can transform code into a functionally equivalent, more efficient form, but they cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better. That is, an optimized version of a compiler will never spot new optimization tricks that earlier versions failed to see, or innovate new ways of improving its own program.
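The distinction above can be made concrete with a toy example (not from the article; both function names are illustrative). A mechanical optimizer can speed up the loop below through transformations like unrolling or strength reduction, but only a rewrite that understands what the loop *means* can replace it with the closed-form version:

```python
def sum_to_n_naive(n):
    # O(n): the kind of code an optimizer can make faster (loop unrolling,
    # strength reduction) without changing its asymptotic behavior.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_closed_form(n):
    # O(1): Gauss's formula. Reaching this version requires recognizing the
    # intent of the loop, which mechanical optimizers do not do.
    return n * (n + 1) // 2

# Functionally equivalent, yet no optimizer derives one from the other.
assert sum_to_n_naive(1000) == sum_to_n_closed_form(1000) == 500500
```

The gap between these two functions is exactly the gap the article describes: transforming code versus understanding and redesigning it.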
Seed AI must be able to understand the purpose behind the various elements of its design, and design entirely new modules that will make it genuinely more intelligent and more effective in fulfilling its purpose.
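The improve-evaluate-adopt cycle this implies can be sketched in a deliberately simplified form. Everything here is hypothetical (`benchmark`, `make_solver`, and the single tunable "design" parameter are stand-ins); a real seed AI would rewrite its own reasoning machinery, not tune one number:

```python
def benchmark(solver):
    """Score a candidate design: how close it gets to sqrt(2)."""
    target = 2 ** 0.5
    return -abs(solver(2.0) - target)

def make_solver(iterations):
    """A candidate 'design': Newton's method for square roots, with a
    given iteration depth standing in for a design choice."""
    def solver(x):
        guess = x
        for _ in range(iterations):
            guess = 0.5 * (guess + x / guess)
        return guess
    return solver

# The self-improvement loop: each generation proposes a modification to
# the current design and adopts it only if performance measurably improves.
design = 1
for generation in range(10):
    candidate = design + 1  # propose a modified design
    if benchmark(make_solver(candidate)) > benchmark(make_solver(design)):
        design = candidate  # adopt the improvement

assert benchmark(make_solver(design)) >= benchmark(make_solver(1))
```

The loop captures the recursive structure, but not the hard part the paragraph names: a seed AI would have to generate qualitatively new designs, not merely search a fixed parameter space.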
There is an inherent threat posed by a program that can edit its own code. Such a machine could ultimately become capable of overwriting any moral limits originally put in place to control it. Thus, such a program could go from benevolent to destructive, bringing any sub-processes with it.
Creating seed AI is the goal of several organizations. The Singularity Institute for Artificial Intelligence is the most prominent of those explicitly working to create seed AI. Others include the Artificial General Intelligence Research Institute, creator of the Novamente AI engine, Adaptive Artificial Intelligence Incorporated, Texai.org, and Consolidated Robotics.
See also 
- Evolutionary programming
- Eliezer Yudkowsky
- Friendly AI — a theory related to Seed AI.
- General intelligence
- Simulated reality
- Singularity Institute for Artificial Intelligence — a non-profit foundation promoting Seed AI and related theories.
- Singularitarianism — a term for those who promote Seed AI and related theories.
- Synthetic intelligence
- Technological singularity
- Artificial life
References
- Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine". In Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers (Academic Press) 6: 31–88. Archived from the original on 2001-05-27; retrieved 2007-08-07.
- Yudkowsky, Eliezer (2013). "General Intelligence and Seed AI". Retrieved 8 February 2013.
External links
- Jürgen Schmidhuber's "Gödel machine" architecture
- Adaptive AI Inc. — A2I2 project website
- The Novamente AI Engine — The project page for AGIRI's planned seed AI
- Levels of Organization in General Intelligence — A formal, academic examination of seed AI design principles
- General Intelligence and Seed AI — An explanation of seed AI from the Singularity Institute