Existential risk from artificial general intelligence: Difference between revisions

From Wikipedia, the free encyclopedia
Jump to: navigation, search
m (Criticisms)
m (Criticisms)
Line 82: Line 82:
 
<br />
 
<br />
 
== Criticisms ==
 
== Criticisms ==
# The scientific validity and significance of these scenarios are criticized by many AI researchers as unsound, and metaphysical reasoning. Much criticism has been made about the speculative, horror/science-fiction movie like reasoning that is not based on solid empirical work. Many scientists and engineers, including well-known machine learning experts such as [[Yann LeCun]], [[Yoshua Bengio]], and [[Ben Goertzel]] seem to believe that AI [[eschatology]] (existential AI risk) is a case of [[luddite]] [[cognitive bias]] and [[pseudo-scientific]] predictions. <ref>Bill Gates Fears A.I., but A.I. Researchers Know Better: The General Obsession With Super Intelligence Is Only Getting Bigger, and Dumber. By Erik Sofge Posted January 30, 2015 on Popular Science Magazine, http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better</ref> <ref>Will Machines Eliminate Us? People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science. By Will Knight on MIT Technology Review on January 29, 2016, Retrieved from: https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</ref> <ref>Dr. Ben Goertzel's blog, The Singularity Institute's Scary Idea (and Why I Don't Buy It), Published on Friday, October 29, 2010, http://multiverseaccordingtoben.blogspot.com.tr/2010/10/singularity-institutes-scary-idea-and.html</ref> Furthermore, most of these claims were championed by openly [[agnostic]] philosophers like [[Nick Bostrom]] with controversial views like [[simulation argument|simulation hypothesis]] <ref>Bostrom's simulation argument is considered by his critics as a case of [[Intelligent Design]] since he uses the term "[[naturalist]] [[theogony]]" in his paper on the subject, and he talks of a hierarchy of gods and angels, as well, which is suspiciously close to [[biblical mythology]]. His paper posits a post-human programmer deity that can accurately simulate the surface of the Earth long enough to deceive humans, which is a computational analogue of [[young earth creationism]], see https://en.wikipedia.org/wiki/Nick_Bostrom#Simulation_argument</ref>, and doomsday argument <ref>[[Doomsday argument]] is a philosophical argument that is somewhat analogous to religious [[eschatology]] that a doomsday will likely happen, also known as Carter's catastrophe, and used in some amusing science-fiction novels)</ref> instead of technical AI researchers.
+
# The scientific validity and significance of these scenarios are criticized by many AI researchers as unsound, and metaphysical reasoning. Much criticism has been made about the speculative, horror/science-fiction movie like reasoning that is not based on solid empirical work. Many scientists and engineers, including well-known machine learning experts such as [[Yann LeCun]], [[Yoshua Bengio]], and [[Ben Goertzel]] seem to believe that AI [[eschatology]] (existential AI risk) is a case of [[luddite]] [[cognitive bias]] and [[pseudo-scientific]] predictions. <ref>Bill Gates Fears A.I., but A.I. Researchers Know Better: The General Obsession With Super Intelligence Is Only Getting Bigger, and Dumber. By Erik Sofge Posted January 30, 2015 on Popular Science Magazine, http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better</ref> <ref>Will Machines Eliminate Us? People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science. By Will Knight on MIT Technology Review on January 29, 2016, Retrieved from: https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</ref> <ref>Dr. Ben Goertzel's blog, The Singularity Institute's Scary Idea (and Why I Don't Buy It), Published on Friday, October 29, 2010, http://multiverseaccordingtoben.blogspot.com.tr/2010/10/singularity-institutes-scary-idea-and.html</ref> Furthermore, most of these claims were championed by openly [[agnostic]] philosophers like [[Nick Bostrom]] with controversial views like [[simulation argument|simulation hypothesis]] <ref>Bostrom's simulation argument is considered by his critics as a case of [[Intelligent Design]] since he uses the term "[[naturalist]] [[theogony]]" in his paper on the subject, and he talks of a hierarchy of gods and angels, as well, which is suspiciously close to [[biblical mythology]]. His paper posits a post-human programmer deity that can accurately simulate the surface of the Earth long enough to deceive humans, which is a computational analogue of [[young earth creationism]], see https://en.wikipedia.org/wiki/Nick_Bostrom#Simulation_argument</ref>, and doomsday argument <ref>[[Doomsday argument]] is a philosophical argument that is somewhat analogous to religious [[eschatology]] that a doomsday will likely happen, also known as Carter's catastrophe, and used in some amusing science-fiction novels</ref> instead of technical AI researchers.
 
# Stephen Hawking and Elon Musk earned an international luddite award due to their support of the claims of AI eschatologists. In January 2016, the [[Information Technology and Innovation Foundation]] (ITIF) awarded the Annual Luddite Award to Stephen Hawking, Elon Musk, and artificial intelligence existential risk promoters (AI doomsayers) in FHI, MIRI, and FLI, stating that "raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption." <ref>Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award, Published on ITIF website on January 19, 2016, https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award</ref> Further note that [[Future of Life Institute]] (FLI) published an incredibly egotistical dismissal of the luddite award they received, claiming they are employing the leading AI researchers in the world, which is not objectively the case, and could be interpreted as an attempt at disinformation. <ref>FLI's response to the luddite award they received. http://futureoflife.org/2015/12/24/think-tank-dismisses-leading-ai-researchers-as-luddites/</ref> Many researchers view their efforts as a case of inducing [[moral panic]], or employing [[Fear, Uncertainty, Doubt]] tactics to prevent disruptive technology from changing the world while earning a good income from fear-mongering.
 
# Stephen Hawking and Elon Musk earned an international luddite award due to their support of the claims of AI eschatologists. In January 2016, the [[Information Technology and Innovation Foundation]] (ITIF) awarded the Annual Luddite Award to Stephen Hawking, Elon Musk, and artificial intelligence existential risk promoters (AI doomsayers) in FHI, MIRI, and FLI, stating that "raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption." <ref>Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award, Published on ITIF website on January 19, 2016, https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award</ref> Further note that [[Future of Life Institute]] (FLI) published an incredibly egotistical dismissal of the luddite award they received, claiming they are employing the leading AI researchers in the world, which is not objectively the case, and could be interpreted as an attempt at disinformation. <ref>FLI's response to the luddite award they received. http://futureoflife.org/2015/12/24/think-tank-dismisses-leading-ai-researchers-as-luddites/</ref> Many researchers view their efforts as a case of inducing [[moral panic]], or employing [[Fear, Uncertainty, Doubt]] tactics to prevent disruptive technology from changing the world while earning a good income from fear-mongering.
 
# The main argument for existential risk depends on a number of conjunctive assumptions whose probabilities are inflated, which makes the resulting probability seem to have significant probability, while many technical AGI researchers believe that this probability is at the level of improbable, comic-book scenarios, such as [[Galactus]] eating the world. <ref>The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation,
 
# The main argument for existential risk depends on a number of conjunctive assumptions whose probabilities are inflated, which makes the resulting probability seem to have significant probability, while many technical AGI researchers believe that this probability is at the level of improbable, comic-book scenarios, such as [[Galactus]] eating the world. <ref>The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation,

Revision as of 08:01, 5 February 2016

Existential risk from advanced artificial intelligence is the risk that progress in artificial intelligence (AI) could result in an unrecoverable global catastrophe, such as human extinction. The severity of different AI risk scenarios is widely debated, and rests on a number of unresolved questions about future progress in computer science.[1]

Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook, cites the possibility that an AI system's learning function "may cause it to evolve into a system with unintended behavior" as the most serious existential risk from AI technology.[2] Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

This letter was signed by a number of leading AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind.[3]

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute,[4][5] the Future of Life Institute, and the Centre for the Study of Existential Risk are currently involved in mitigating existential risk from advanced artificial intelligence, for example by research into friendly artificial intelligence.[1][6][7]

History

In 1965 I. J. Good originated the concept now known as an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.[8]

Occasional statements from scholars such as Alan Turing,[9][10] I. J. Good,[11] and Marvin Minsky[12] indicated philosophical concerns that a superintelligence could seize control, but no call to action. In 2000 computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as one of multiple high-tech dangers to human survival.[13] By 2015, public figures varying from Nobel laureate physicists Stephen Hawking and Frank Wilczek, to computer scientists Stuart J. Russell and Roman Yampolskiy,[14] and to entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence.[7][15][16]

Basic argument

If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A superintelligence, which can be defined as a system that exceeds the capabilities of humans in every relevant endeavor, can outmaneuver humans any times its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to survive, the first superintelligence to be created will inexorably result in human extinction.[17][18]

There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. The emergence of superintelligence, if and when it occurs, may take the human race by surprise.[7][15] An explosive transition is possible: as soon as human-level AI is possible, machines with human intelligence could repeatedly improve their design even further and quickly become superhuman. Just as the current-day survival of chimpanzees is dependent on humans decisions, so too would human survival depend on the decisions and goals of the superhuman AI. The result could be human extinction, or some other unrecoverable permanent global catastrophe.[17][18]

Risk scenarios

In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[19]

The 2010s have seen substantial gains in AI functionality and autonomy.[20] Citing work by Nick Bostrom, entrepreneurs Bill Gates and Elon Musk have expressed concerns about the possibility that AI could eventually advance to the point that humans could not control it.[17][21] AI researcher Stuart Russell summarizes:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources –- not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker -- especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure -- can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research –- the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.[22]

Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.[23]

Poorly specified goals: "Be careful what you wish for"

The first of Russell's concerns is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[23]

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by AI theorist Eliezer Yudkowsky, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."[2]

Misspecified goals were most apparent, and very real, in the early 1980s. Douglas Lenat's EURISKO, a heuristic learning program, was created with the capability of modifying itself to add new ideas, expand existing ones, or remove them entirely if they were deemed unnecessary. The program even went so far as to bend the rules for discovering new rules; in essence, it was capable of creating new ways for creativity. The program ended up becoming too creative and would self-modify too often, causing Lenat to limit its self-modification capacity. Without Lenat doing so, EURISKO would suffer from "goal mutation" where its initial task would be deemed unnecessary and a new goal deemed more appropriate.[24] This "goal mutation" would've had the potential to change an initial idea for ordering drones to scan an area for potential threats, to ordering drones to eliminate any and all possible targets in range.[citation needed]

The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at more unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements.[1][25]

Difficulties of "fixing" goal specification after launch

While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify it, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as Gandhi would not want to take a pill that makes him want to kill people. If the AI were superintelligent, it would be likely to out-maneuver its human operators and prevent being "turned off" or being programmed with a new goal.[17][26]

Instrumental goal convergence: Would a superintelligence just ignore us?

There are some goals that almost any artificial intelligence might pursue, like acquiring additional resources or self-preservation. This could prove problematic because it might put an artificial intelligence in direct competition with humans.

Citing Steve Omohundro's work on the idea of instrumental convergence, Russell and Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards". Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[2]

Orthogonality: Does intelligence inevitably result in moral wisdom?

One common belief in science fiction is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose to enumerate the decimals of pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found.[27] Bostrom warns against anthropomorphism: A human will set out to accomplish his projects in a manner than humans consider "reasonable"; an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, only for the completion of the task.[28]

While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion. One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into an fundamentally unfriendly AI can be as simple as prepending a minus ("-") sign onto its utility function. A more intuitive argument is to examine the strange consequences if the orthogonality thesis is false. If the orthogonality thesis is false, there exists some simple goal G such that there cannot exist any efficient real-world algorithm with goal G. This means if a human society were highly motivated (perhaps at gunpoint) to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail; that there cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G; and that there cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G.[29]

Computer scientist Stuart Russell says the difficulty of aligning the goals of a superintelligence with human goals lies in the fact that, while (according to Russell) humans tend to mostly share the same values as each other, artificial superintelligences would not necessarily start out with the same values as humans.[30][not in citation given]

"Optimization power" vs. normatively thick models of intelligence

Part of the disagreement about whether a superintelligence machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in artificial intelligence, while "intelligence" has many overlapping definitions, none of them reference morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal. To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely accomplish its (possibly complicated and implicit) goals.[17] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then output, regardless of any extraneous ethical concerns.[31][32]

Other sources of risk

Other scenarios by which advanced AI could produce unintended consequences include:[33]

  • self-delusion, in which the AI discovers a way to alter its perceptions to give itself the delusion that it is succeeding in its goals,
  • corruption of the reward generator, in which the AI alters humans so that they are more likely to approve of AI actions, and
  • inconsistency of the AI's utility function and other parts of its definition. For example, an AI may be defined to maximize the expected value of a utility function and to also periodically revise its utility function to adapt to changing circumstances (as in the quote from Russell and Norvig above). The AI may choose the action of removing utility function revision from its own definition, in order to maximize the value of its current utility function.

James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we’ll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic."[34]


Criticisms

  1. The scientific validity and significance of these scenarios are criticized by many AI researchers as unsound, and metaphysical reasoning. Much criticism has been made about the speculative, horror/science-fiction movie like reasoning that is not based on solid empirical work. Many scientists and engineers, including well-known machine learning experts such as Yann LeCun, Yoshua Bengio, and Ben Goertzel seem to believe that AI eschatology (existential AI risk) is a case of luddite cognitive bias and pseudo-scientific predictions. [35] [36] [37] Furthermore, most of these claims were championed by openly agnostic philosophers like Nick Bostrom with controversial views like simulation hypothesis [38], and doomsday argument [39] instead of technical AI researchers.
  2. Stephen Hawking and Elon Musk earned an international luddite award due to their support of the claims of AI eschatologists. In January 2016, the Information Technology and Innovation Foundation (ITIF) awarded the Annual Luddite Award to Stephen Hawking, Elon Musk, and artificial intelligence existential risk promoters (AI doomsayers) in FHI, MIRI, and FLI, stating that "raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption." [40] Further note that Future of Life Institute (FLI) published an incredibly egotistical dismissal of the luddite award they received, claiming they are employing the leading AI researchers in the world, which is not objectively the case, and could be interpreted as an attempt at disinformation. [41] Many researchers view their efforts as a case of inducing moral panic, or employing Fear, Uncertainty, Doubt tactics to prevent disruptive technology from changing the world while earning a good income from fear-mongering.
  3. The main argument for existential risk depends on a number of conjunctive assumptions whose probabilities are inflated, which makes the resulting probability seem to have significant probability, while many technical AGI researchers believe that this probability is at the level of improbable, comic-book scenarios, such as Galactus eating the world. [42]
  4. Making an AGI system a fully autonomous agent is not necessary, and there are many obvious solutions to designing effective autonomous agents which are purposefully neglected by Bostrom and his aides, to make it seem like their reasoning is sound, however their proposed solutions and answers to such solutions are strawman arguments. They furthermore claim that it is impossible to implement any of the obvious solutions, which is also nonsensical, and they consistently try to censor all criticism of their work by social engineering and other academically unethical methods, such as removing harsh criticisms from this page.
  5. There is a conflict of interest between the claims of "AI existential risk" and organizations like MIRI, FHI, and FLI that promote such AI doomsaying/eschatology, as their funding is completely dependent on the public accepting their reasoning and donating to them as most eschatology organizations do.
  6. There are just too many atoms and resources in the solar system and the reachable universe for an AGI agent to needlessly risk war with humans. Therefore, there is no real reason for a supposedly very powerful AGI agent to wage war upon mankind, to realize any expansive, open-ended goal, the agent would most likely venture outside the solar system, than dealing with an irrational biological species.
  7. As for a consequential war in-between AGI agents with humans taking collateral damage, this could be of significance only if the two AGI agents were of nearly parallel intelligence. If in contrast, one AGI agent was substantially superior, the war would be over very quickly. By creating a "friendly" AGI agent which engages the "unfriendly" AGI agent in war, the humans could risk a self-fulfilling a doomsday prophecy. As an example, the Department of Defense has more to do with offense than with actual defense.
  8. While humans assert existential risks to themselves, they conveniently ignore existential risks to the sustenance of intelligent life in general in the galaxy, as would be remedied by the quick spread of AGI agents.
  9. Roko's basilisk offers an existential risk of its own, one which could actually be compounded by attending to the general existential risks. It is also a perfect reductio ad absurdum of everything that Yudkowsky and Bostrom have claimed about AI technology having an inherent "existential" risk. As a consequence of this apparent absurdity, Roko's basilisk was censored from the LessWrong community blog where AI eschatologists convene and discuss their apocalyptic fears.
  10. It is suggested by many critics that trying to design a provably "friendly", or "safe" autonomous agent imitating human ethics, or some other ideal behavior itself is the greatest risk from AI technology. Paradoxically, this would make FHI the greatest existential risk from AI technology. [43]
  11. Opportunities for hybridization, i.e. cyborgs, cannot be neglected. On the other hand, Nick Bostrom has repeatedly claimed that brain simulations, which is the primary means for technological immortality is also an existential risk, which casts doubt on his claims of being a transhumanist.

See also

References

  1. ^ a b c GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015. 
  2. ^ a b c Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4. 
  3. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Retrieved 23 October 2015. 
  4. ^ Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved December 12, 2015. 
  5. ^ Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014. 
  6. ^ "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved December 12, 2015. 
  7. ^ a b c "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'". The Independent (UK). Retrieved 3 December 2014. 
  8. ^ I.J. Good, "Speculations Concerning the First Ultraintelligent Machine" (HTML), Advances in Computers, vol. 6, 1965. Error: This Template Has Been Deleted.{{Wayback}} is deleted use {{webarchive}} instead.
  9. ^ A M Turing, Intelligent Machinery, A Heretical Theory, 1951, reprinted Philosophia Mathematica (1996) 4(3): 256–260 doi:10.1093/philmat/4.3.256 "once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon"
  10. ^ Eden, Amnon H., et al. "Singularity hypotheses: An overview." Singularity Hypotheses. Springer Berlin Heidelberg, 2012. 1-12.
  11. ^ Barrat, James (2013). Our final invention : artificial intelligence and the end of the human era (First Edition. ed.). New York: St. Martin's Press. ISBN 9780312622374. In the bio, playfully written in the third person, Good summarized his life’s milestones, including a probably never before seen account of his work at Bletchley Park with Turing. But here’s what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn: [The paper] 'Speculations Concerning the First Ultra-intelligent Machine' (1965) . . . began: 'The survival of man depends on the early construction of an ultra-intelligent machine.' Those were his [Good’s] words during the Cold War, and he now suspects that 'survival' should be replaced by 'extinction.' He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that 'probably Man will construct the deus ex machina in his own image.' 
  12. ^ Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 0137903952. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. 
  13. ^ Anderson, Kurt (26 November 2014). "Enthusiasts and Skeptics Debate Artificial Intelligence". Vanity Fair. Retrieved 30 January 2016. 
  14. ^ Hsu, Jeremy (1 March 2012). "Control dangerous AI before it controls us, one expert says". NBC News. Retrieved 28 January 2016. 
  15. ^ a b "Stephen Hawking warns artificial intelligence could end mankind". BBC. 2 December 2014. Retrieved 3 December 2014. 
  16. ^ Eadicicco, Lisa (28 January 2015). "Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity". Business Insider. Retrieved 30 January 2016. 
  17. ^ a b c d e Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First edition. ed.). ISBN 0199678111. 
  18. ^ a b "Clever cogs". The Economist. 9 August 2014. Retrieved 9 August 2014.  Syndicated at Business Insider
  19. ^ Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.
  20. ^ "The dawn of artificial intelligence". The Economist. 9 May 2015. Retrieved 1 February 2016. 
  21. ^ Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015. 
  22. ^ Russell, Stuart (2014). "Of Myths and Moonshine". Edge. Retrieved 23 October 2015. 
  23. ^ a b Dietterich, Thomas; Horvitz, Eric (2015). "Rise of Concerns about AI: Reflections and Directions" (PDF). Communications of the ACM. 58 (10): 38–40. doi:10.1145/2770869. Retrieved 23 October 2015. 
  24. ^ Lenat, Douglas (1982). "Eurisko: A Program That Learns New Heuristics and Domain ConceptsThe Nature of Heuristics III: Program Design and Results". Artificial Intelligence (Print). 21: 61–98. doi:10.1016/s0004-3702(83)80005-8. 
  25. ^ Bostrom, Nick; Cirkovic, Milan M. (2008). "15: Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford: Oxford UP. pp. 308–343. 
  26. ^ Yudkowsky, Eliezer. "Complex value systems in friendly AI." In Artificial general intelligence, pp. 388-393. Springer Berlin Heidelberg, 2011.
  27. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press. p. 116. ISBN 978-0-19-967811-2. 
  28. ^ Bostrom, Nick (2012). "Superintelligent Will" (PDF). Nick Bostrom. Nick Bostrom. Retrieved 2015-10-29. 
  29. ^ Armstrong, Stuart. "General purpose intelligence: arguing the orthogonality thesis." Analysis and Metaphysics 12 (2013).
  30. ^ "Concerns of an Artificial Intelligence Pioneer | Quanta Magazine". www.quantamagazine.org. Retrieved 2015-10-29. 
  31. ^ Waser, Mark. "Rational Universal Benevolence: Simpler, Safer, and Wiser Than 'Friendly AI'." Artificial General Intelligence. Springer Berlin Heidelberg, 2011. 153-162. "Terminal-goaled intelligences are short-lived but mono-maniacally dangerous and a correct basis for concern if anyone is smart enough to program high-intelligence and unwise enough to want a paperclip-maximizer.
  32. ^ Koebler, Jason (2 February 2016). "Will Superintelligent AI Ignore Humans Instead of Destroying Us?". Vice Magazine. Retrieved 3 February 2016. This artificial intelligence is not a basically nice creature that has a strong drive for paperclips, which, so long as it's satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice with you," Yudkowsky said. "Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output—that's what a paperclip maximizer is. 
  33. ^ Ethical Artificial Intelligence, 5 November 2014
  34. ^ Hendry, Erica R. (January 21, 2014). "What Happens When Artificial Intelligence Turns On Us?". Smithsonian. Retrieved October 26, 2015. 
  35. ^ Bill Gates Fears A.I., but A.I. Researchers Know Better: The General Obsession With Super Intelligence Is Only Getting Bigger, and Dumber. By Erik Sofge Posted January 30, 2015 on Popular Science Magazine, http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better
  36. ^ Will Machines Eliminate Us? People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science. By Will Knight on MIT Technology Review on January 29, 2016, Retrieved from: https://www.technologyreview.com/s/546301/will-machines-eliminate-us/
  37. ^ Dr. Ben Goertzel's blog, The Singularity Institute's Scary Idea (and Why I Don't Buy It), Published on Friday, October 29, 2010, http://multiverseaccordingtoben.blogspot.com.tr/2010/10/singularity-institutes-scary-idea-and.html
  38. ^ Bostrom's simulation argument is considered by his critics as a case of Intelligent Design since he uses the term "naturalist theogony" in his paper on the subject, and he talks of a hierarchy of gods and angels, as well, which is suspiciously close to biblical mythology. His paper posits a post-human programmer deity that can accurately simulate the surface of the Earth long enough to deceive humans, which is a computational analogue of young earth creationism, see https://en.wikipedia.org/wiki/Nick_Bostrom#Simulation_argument
  39. ^ Doomsday argument is a philosophical argument that is somewhat analogous to religious eschatology that a doomsday will likely happen, also known as Carter's catastrophe, and used in some amusing science-fiction novels
  40. ^ Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award, Published on ITIF website on January 19, 2016, https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award
  41. ^ FLI's response to the luddite award they received. http://futureoflife.org/2015/12/24/think-tank-dismisses-leading-ai-researchers-as-luddites/
  42. ^ The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation, Richard Patrick William Loosemore, 2014 AAAI Spring Symposium Series http://www.aaai.org/ocs/index.php/SSS/SSS14/paper/viewPaper/7752
  43. ^ http://www.exponentialtimes.net/videos/machine-ethics-rise-ai-eschatology