Jump to content

Friendly artificial intelligence

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Emesee (talk | contribs) at 06:23, 23 December 2008. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A Friendly Artificial Intelligence or FAI is an artificial intelligence (AI) that has a positive rather than negative effect on humanity. Friendly AI also refers to the field of knowledge required to build such an AI. This term particularly applies to AIs which have the potential to significantly impact humanity, such as those with intelligence comparable to or exceeding that of humans ("superintelligence"; see strong AI and technological singularity.) This specific term was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a technical term distinct from the everyday meaning of the word "friendly"; however, the concern is much older.

Goals and Definitions of Friendly AI

Many experts have argued that AI systems with goals that are not perfectly identical to or very closely aligned with our own are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. Decades ago, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students that any truly alien mind, to include machine minds, was unknowable and therefore dangerous. More recently, Eliezer Yudkowsky has called for the creation of “Friendly AI” to mitigate the existential threat of hostile intelligences. Stephen Omohundro argues that all advanced AI systems will, unless explicitly counteracted, exhibit a number of basic drives/tendencies/desires because of the intrinsic nature of goal-driven systems and that these drives will, “without special precautions”, cause the AI to act in ways that range from the disobedient to the dangerously unethical.

According to the proponents of Friendliness, the goals of future AIs will be more arbitrary and alien than commonly depicted in science fiction and earlier futurist speculation, in which AIs are often anthropomorphised and assumed to share universal human modes of thought. Because AI is not guaranteed to see the "obvious" aspects of morality and sensibility that most humans see so effortlessly, the theory goes, AIs with intelligences or at least physical capabilities greater than our own may concern themselves with endeavours that humans would see as pointless or even laughably bizarre. One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question. For humans, this would seem ridiculously absurd, but as Friendliness theory stresses, this is only because we evolved to have certain instinctive sensibilities which a robot, not sharing our evolutionary history, may not necessarily comprehend unless we design it to.

Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them. Superintelligent AIs may be harmful to humans if steps are not taken to specifically design them to be benevolent. Doing so effectively is the primary goal of Friendly AI. Designing an AI, whether deliberately or semi-deliberately, without such "Friendliness safeguards", would therefore be seen as highly immoral, approximately equivalent to a parent raising a child with absolutely no regard for whether that child grows up to be a psychopath.

Hugo de Garis is noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 This prediction has attracted debate and criticism from the AI research community, and some of its more notable members, such as Kevin Warwick, Bill Joy, Ken MacLeod, Ray Kurzweil, Hans Moravec, and Roger Penrose, have voiced their opinions on whether or not this future is likely.

This belief that human goals are so arbitrary derives heavily from modern advances in evolutionary psychology. Friendliness theory claims that most AI speculation is clouded by analogies between AIs and humans, and assumptions that all possible minds must exhibit characteristics that are actually psychological adaptations that exist in humans (and other animals) only because they were once beneficial and perpetuated by natural selection. This idea is expanded on greatly in section two of Yudkowsky's Creating Friendly AI, "Beyond anthropomorphism".

Many supporters of FAI speculate that an AI able to reprogram and improve itself, Seed AI, is likely to create a huge power disparity between itself and statically intelligent human minds; that its ability to enhance itself would very quickly outpace the human ability to exercise any meaningful control over it. While many doubt such scenarios are likely, if they were to occur, it would be important for AI to act benevolently towards humans. As Oxford philosopher Nick Bostrom puts it:

"Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'"

It is important to stress that Yudkowsky's Friendliness Theory is very different from ideas relating to the concept that AIs may be made safe by including specifications or strictures into their programming or hardware architecture, often exemplified by Isaac Asimov's Three Laws of Robotics, which would, in principle, force a machine to do nothing which might harm a human, or destroy it if it does attempt to do so. Friendliness Theory rather holds that the inclusion of such laws would be futile, because no matter how such laws are phrased or described, a truly intelligent machine with genuine (human-level or greater) creativity and resourcefulness could potentially design infinitely many ways of circumventing such laws, no matter how broadly or narrowly defined they were, or otherwise how categorically comprehensive they were formulated to be.

Rather, Yudkowsky's Friendliness Theory relates, through the fields of biopsychology, that if a truly intelligent mind feels motivated to carry out some function, the result of which would violate some constraint imposed against it, then given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization). Therefore, the appropriate response to the threat posed by such intelligence, is to attempt to ensure that such intelligent minds specifically feel motivated to not harm other intelligent minds (in any sense of the word "harm"), and to that end will deploy their resources towards devising better methods of keeping them from harm. In this scenario, an AI would be free to murder, injure, or enslave a human being, but it would strongly desire not to do so and would only do so if it judged, according to that same desire, that some vastly greater good to that human or to human beings in general would result (though this particular idea is explored in Asimov's I, Robot stories, via the Zeroth Law). Therefore, an AI designed with Friendliness safeguards would do everything in its power to ensure humans do not come to "harm", and to ensure that any other AIs that are built would also want humans not to come to harm, and to ensure that any upgraded or modified AIs, whether itself or others, would also never want humans to come to harm - it would try to minimize the harm done to all intelligent minds in perpetuity. As Yudkowsky puts it:

"Gandhi does not want to commit murder, and does not want to modify himself to commit murder."

One of the more contentious, recent hypotheses in Friendliness theory is the Coherent Extrapolated Volition model, also developed by Yudkowsky. According to him our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up farther together. Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if "we knew more, thought faster, were more the people we wished we were, and had grown up farther together."

Requirements for FAI and effective FAI

The requirements for FAI to be effective, both internally, to protect humanity against unintended consequence of the AI in question and externally to protect against other non-FAIs arising from whatever source are:

  1. Friendliness - that an AI feel sympathetic towards humanity and all life, and seek for their best interests
  2. Conservation of Friendliness - that an AI must desire to pass on its value system to all of its offspring and inculcate its values into others of its kind
  3. Intelligence - that an AI be smart enough to see how it might engage in altruistic behaviour to the greatest degree of equality, so that it is not kind to some but more cruel to others as a consequence, and to balance interests effectively
  4. Self-improvement - that an AI feel a sense of longing and striving for improvement both of itself and of all life as part of the consideration of wealth, while respecting and sympathising with the informed choices of lesser intellects not to improve themselves
  5. First mover advantage - the first goal-driven general self-improving AI "wins" in the memetic sense, because it is powerful enough to prevent any other AI emerging, which might compete with its own goals.

Promotion and support

Promoting Friendly AI is one of the primary goals of the Singularity Institute for Artificial Intelligence, along with obtaining funding for, and ultimately creating a seed AI program implementing the ideas of Friendliness theory.

Several notable futurists have voiced support for Friendly AI, including author and inventor Raymond Kurzweil, medical life-extension advocate Aubrey de Grey, and World Transhumanist Association founder Dr. Nick Bostrom of Oxford University.

Criticism

One notable critic of Friendliness theory is Bill Hibbard, author of Super-Intelligent Machines, who considers the theory incomplete. Hibbard writes there should be broader political involvement in the design of AI and AI morality. He also believes that initially seed AI could only be created by powerful private sector interests (a view not shared by Yudkowsky), and that multinational corporations and the like would have no incentive to implement Friendliness theory.

In his criticism of the Singularity Institute's Friendly AI guidelines, he suggests an AI goal architecture in which human happiness is determined by human behaviors indicating happiness: "Any artifact implementing 'learning' [...] must have 'human happiness' as its only initial reinforcement value [...] and 'human happiness' values are produced by an algorithm produced by supervised learning, to recognize happiness in human facial expressions, voices and body language, as trained by human behavior experts." Yudkowsky later criticized this proposal by remarking that such a utility function would be better satisfied by filling the Solar System with microscopic smiling mannequins than by making existing humans happier.

Others, like Ben Goertzel, an Artificial General Intelligence researcher and now Director of research at the Singularity Institute, support the basic principles of the Friendly Artificial Intelligence concept but believe that guaranteed Friendliness is not possible.

See also

Further reading

Discusses Artificial Intelligence from the perspective of Existential risk, introducing the term "Friendly AI". In particular, Sections 1-4 give background to the definition of Friendly AI in Section 5. Section 6 gives two classes of mistakes (technical and philosophical) which would both lead to the accidental creation of non-Friendly AIs. Sections 7-13 discuss further related issues.

  • Omohundro, S. 2008 The Basic AI Drives Appeared in AGI-08 - Proceedings of the First Conference on Artificial General Intelligence