Machine Intelligence Research Institute

From Wikipedia, the free encyclopedia

Machine Intelligence Research Institute
Formation: 2000
Type: Nonprofit research institute
Legal status: 501(c)(3) tax exempt charity
Purpose: Research into friendly artificial intelligence
Key people: Edwin Evans, Nate Soares, Eliezer Yudkowsky
Staff: 14
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 to research safety issues related to the development of advanced artificial intelligence.

MIRI aims to create new tools and protocols to ensure the safety of future AI technology. The organization publishes research papers and hosts workshops to develop mathematical foundations for this project, and has been named as one of several academic and nonprofit groups studying long-term AI outcomes.[1][2][3]

History

In 2000, Eliezer Yudkowsky, together with Brian and Sabine Atkins, founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence".[4][5] In early 2005, SIAI relocated from Atlanta, Georgia, to Silicon Valley. From 2006 to 2012, the Institute collaborated with Singularity University to produce the Singularity Summit, a science and technology conference. Speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, James Randi, and Douglas Hofstadter.[6][7]

In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, whose focus is on using ideas from cognitive science to improve people's effectiveness in their daily lives.[8] Having previously shortened its name to "Singularity Institute", in January 2013 SIAI changed its name to the "Machine Intelligence Research Institute" in order to avoid confusion with Singularity University.

In 2014, Stephen Hawking and AI pioneer Stuart Russell co-authored a Huffington Post article citing the work of MIRI and other organizations in the area:

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [...] Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.[2]

In early 2015 the Future of Life Institute organized a conference to set concrete research priorities to address the risks of AI. Benja Fallenstein of MIRI won a $250,000 grant in the resulting funding program.[3]

Research

Nate Soares presenting an overview of the AI alignment problem at Google.

Russell and Norvig's textbook Artificial Intelligence: A Modern Approach summarized the thesis of Eliezer Yudkowsky, co-founder and senior researcher:

If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors in such a way that they design themselves to treat us well. [...] Yudkowsky (2008)[9] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the problem is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[10]

MIRI researchers believe that progress in AI might accelerate rapidly after artificial general intelligence is developed, to the point where an agent becomes superintelligent and difficult to control.[11][12] However, MIRI researchers have also found that expert predictions about the development of AI have been overoptimistic and biased, and no better than predictions by nonexperts, and they have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner".[12] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change and has developed new measures of the relative computational power of humans and computer hardware.[13] MIRI researchers advocate early safety work as a precautionary measure, with Soares stating that humans usually fail to prepare adequately for threats until it is too late,[14] and staff writer Matthew Graves arguing that controlling a superintelligent AI would be a difficult problem.[15]

MIRI's technical research emphasizes the avoidance of faults in theoretical AI systems.[16] Soares and Fallenstein recommend research into "error-tolerant" software systems, citing human error and default incentives as sources of serious risk.[17][3] Jessica Taylor has worked on "quantilization", a safer alternative to utility maximization.[18] MIRI's work also includes formalizing cooperation in the prisoner's dilemma between "superrational" software agents[19] and defining an alternative to causal decision theory and evidential decision theory.[20]
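
The quantilizer idea admits a short illustration. The following Python fragment is not MIRI code; it is a minimal sketch of Taylor's proposal under the simplifying assumptions of a finite action set and a uniform base distribution, with all names (quantilize, utility, q) chosen here for exposition. Rather than always taking the single action that maximizes an estimated utility function, the agent samples from the top q fraction of actions, limiting how aggressively it exploits errors in a possibly mis-specified estimate.

```python
import random

def quantilize(actions, utility, q=0.1, rng=random):
    # Rank the candidate actions by the (possibly mis-specified)
    # utility estimate, best first.
    ranked = sorted(actions, key=utility, reverse=True)
    # Keep the top q fraction of actions (at least one) ...
    k = max(1, int(len(ranked) * q))
    # ... and sample uniformly from it instead of returning the argmax.
    return rng.choice(ranked[:k])

# Toy usage: a plain maximizer would always return 30, the argmax of the
# estimate below; a 10% quantilizer spreads its choice over the ten
# highest-scoring actions.
actions = list(range(100))
print(quantilize(actions, utility=lambda a: -(a - 30) ** 2, q=0.1))
```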

Yudkowsky believes that determining the correct goals for autonomous systems is an ethical question,[21] and that a superintelligent AI with destructive goals would not be controllable.[22] He argues that the intentions of the operators are too vague and contextual to be easily coded.[23] Muehlhauser and Bostrom argue that hard-coded moral values would eventually be seen as obsolete.[12] Soares argues that aligning powerful AI with human values will be difficult,[24][25] and proposes that autonomous AI systems be designed to inductively learn the values of humans from observational data.[16]
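
Soares's value learning proposal is stated at the level of a research agenda rather than a specific algorithm. One standard formal setting that such proposals draw on is Bayesian inference over candidate utility functions from observed choices; the sketch below assumes a "Boltzmann-rational" observation model and two toy candidate utility functions, and every name and number in it is hypothetical rather than drawn from MIRI's publications.

```python
import math

# Two toy hypotheses about what the observed human values.
candidates = {
    "values_speed":  lambda option: {"fast": 1.0, "safe": 0.2}[option],
    "values_safety": lambda option: {"fast": 0.1, "safe": 1.0}[option],
}
beta = 2.0  # assumed "rationality": higher = human chooses more reliably optimally

def update(posterior, chosen, options):
    # Bayesian update under a Boltzmann-rational choice model:
    # P(chosen | u) is proportional to exp(beta * u(chosen)).
    new = {}
    for name, u in candidates.items():
        z = sum(math.exp(beta * u(o)) for o in options)
        new[name] = posterior[name] * math.exp(beta * u(chosen)) / z
    total = sum(new.values())
    return {name: p / total for name, p in new.items()}

# Watching the human repeatedly choose "safe" over "fast" shifts the
# posterior toward the safety-valuing utility function.
posterior = {name: 0.5 for name in candidates}  # uniform prior
for _ in range(3):
    posterior = update(posterior, "safe", ["fast", "safe"])
print(posterior)  # most probability mass ends up on "values_safety"
```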

References

  1. ^ GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
  2. ^ a b Hawking, Stephen; Tegmark, Max; Russell, Stuart; Wilczek, Frank (2014). "Transcending Complacency on Superintelligent Machines". The Huffington Post. Retrieved 11 October 2015.
  3. ^ a b c Basulto, Dominic (2015). "The very best ideas for preventing artificial intelligence from wrecking the planet". The Washington Post. Retrieved 11 October 2015.
  4. ^ Ackerman, Elise (2008). "Annual A.I. conference to be held this Saturday in San Jose". San Jose Mercury News. Retrieved 11 October 2015.
  5. ^ "Scientists Fear Day Computers Become Smarter Than Humans". Fox News Channel. Associated Press. 2007. Retrieved 12 October 2015.
  6. ^ Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle. Retrieved 12 October 2015.
  7. ^ Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle. Retrieved 12 October 2015.
  8. ^ Chen, Angela (2014). "More Rational Resolutions". The Wall Street Journal. Retrieved 5 March 2015.
  9. ^ Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  10. ^ Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  11. ^ Waters, Richard (31 October 2014). "Artificial intelligence: machine v man". Financial Times. Retrieved 27 August 2018.
  12. ^ a b c LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Retrieved 12 October 2015.
  13. ^ Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Retrieved 12 October 2015.
  14. ^ Sathian, Sanjena. "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. OZY. Retrieved 28 July 2018.
  15. ^ Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018.
  16. ^ a b Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda". In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. (eds.). The Technological Singularity: Managing the Journey. Springer.
  17. ^ Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  18. ^ Taylor, Jessica (2016). "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
  19. ^ LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  20. ^ Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop (FEW). Retrieved 28 July 2018.
  21. ^ Clarke, Richard A. (2017). Warnings: Finding Cassandras to Stop Catastrophes. HarperCollins Publishers. ISBN 0062488023.
  22. ^ Dowd, Maureen (March 26, 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". Vanity Fair. Retrieved 27 August 2018.
  23. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.
  24. ^ Gallagher, Brian. "Scary AI Is More "Fantasia" Than "Terminator"". Nautilus. Retrieved 28 July 2018.
  25. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (Winter 2015). "Research Priorities for Robust and Beneficial Artificial Intelligence". AI Magazine. 36 (4): 6. Retrieved 27 August 2018.