Machine Intelligence Research Institute
Formation: 2000
Type: Nonprofit research institute
Legal status: 501(c)(3) tax-exempt charity
Purpose: Research into friendly artificial intelligence
Key people: Edwin Evans, Nate Soares, Eliezer Yudkowsky
Staff: 14
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 to research safety issues related to the development of advanced artificial intelligence.

MIRI aims to create new tools and protocols to ensure the safety of future AI technology. The organization publishes research papers and hosts workshops to develop mathematical foundations for this project, and has been named as one of several academic and nonprofit groups studying long-term AI outcomes.[1][2][3]

History

In 2000, Eliezer Yudkowsky, Brian Atkins, and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence".[4][5] In early 2005, SIAI relocated from Atlanta, Georgia, to Silicon Valley. From 2006 to 2012, the Institute collaborated with Singularity University to produce the Singularity Summit, a science and technology conference whose speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, James Randi, and Douglas Hofstadter.[6][7]

In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, whose focus is on using ideas from cognitive science to improve people's effectiveness in their daily lives.[8] Having previously shortened its name to "Singularity Institute", in January 2013 SIAI changed its name to the "Machine Intelligence Research Institute" in order to avoid confusion with Singularity University.

In fall 2014, Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies,[9] which received an endorsement from Elon Musk,[11][12] helped spark public discussion about the work of researchers such as Yudkowsky on the risk of unsafe artificial general intelligence.[10] Stephen Hawking and AI pioneer Stuart Russell co-authored a Huffington Post article citing the work of MIRI and other organizations in the area:

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [...] Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.[2]

In early 2015, MIRI's research was cited in a research priorities document accompanying an open letter on AI that called for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial".[18] Musk responded by funding a large AI safety grant program, with grant recipients including Bostrom, Russell, Bart Selman, Francesca Rossi, Thomas Dietterich, Manuela M. Veloso, and researchers at MIRI.[3] MIRI expanded as part of a general wave of increased interest in safety among other researchers in the AI community.[10]

Research

Nate Soares presenting an overview of the AI alignment problem.

Russell and Norvig's textbook Artificial Intelligence: A Modern Approach summarized the thesis of Eliezer Yudkowsky, co-founder and senior researcher:

If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors in such a way that they design themselves to treat us well. [...] Yudkowsky (2008)[13] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the problem is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[14]

Writing in Skeptic magazine, MIRI staff writer Matthew Graves argued for the importance of the AI alignment problem.[15] MIRI's work on the problem falls into the areas of forecasting, reliability, error tolerance, and value specification.

Forecasting

MIRI studies questions regarding what can or cannot be predicted about future AI technology, how to improve forecasting capability, and which interventions available today appear to be the most beneficial.[16]

MIRI researchers believe that progress in AI might accelerate rapidly after artificial general intelligence is developed, an idea stemming from I. J. Good's argument that sufficiently advanced AI systems will eventually outperform humans at software engineering, leading to a feedback loop of increasingly capable AI systems.[17][13][18][12] However, MIRI researchers have also found that expert predictions about the development of AI have been overoptimistic, biased, and no more accurate than predictions by non-experts, and they have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner".[12] MIRI researchers advocate early safety work as a precautionary measure, with Soares stating that humans usually fail to prepare adequately for threats until it is too late.[19]
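
The feedback-loop argument above can be illustrated with a toy numerical model: if each round of self-improvement increases capability in proportion to current capability, growth compounds and accelerates sharply. The sketch below is purely illustrative; the starting value, growth constant, and number of generations are arbitrary assumptions, not MIRI forecasts.

    # Toy illustration (not a forecast) of I. J. Good's feedback-loop argument:
    # once an AI system can improve its own software, each improvement makes
    # the next one easier, so progress compounds. All numbers are arbitrary.
    capability = 1.0  # hypothetical multiple of human software-engineering ability
    for generation in range(1, 9):
        capability *= 1 + 0.3 * capability   # gain is proportional to current capability
        print(f"generation {generation}: capability {capability:.3g}")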

MIRI has also funded forecasting work through an initiative called AI Impacts. AI Impacts studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.[20]

Reliability

MIRI research emphasizes the avoidance of faults in theoretical AI systems.[17] Their work includes formalizing cooperation in the prisoner's dilemma between "superrational" software agents[21] and defining an alternative to causal decision theory and evidential decision theory.[22]
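
The program-equilibrium setting studied in the cited prisoner's-dilemma work can be illustrated with a much-simplified sketch: each agent is handed the other agent's source code before choosing to cooperate or defect. The construction in the cited paper relies on provability logic (Löb's theorem); the source-code equality test and the function name clique_bot below are only illustrative stand-ins for that idea.

    import inspect

    def clique_bot(opponent_source: str) -> str:
        """Toy agent for the program-equilibrium prisoner's dilemma: it
        cooperates ("C") only if the opponent's source code is byte-for-byte
        identical to its own, and defects ("D") otherwise. (The cited paper's
        agents instead cooperate when they can *prove* the opponent will
        cooperate back, via Löb's theorem.)"""
        return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

    # Two copies of the same program cooperate with each other ...
    print(clique_bot(inspect.getsource(clique_bot)))          # -> C
    # ... but the program defects against anything else.
    print(clique_bot("def defect_bot(_):\n    return 'D'"))   # -> D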

Error tolerance

Soares argues that it will be hard to direct a capable AI system to pursue the goals that humans actually care about.[23] Soares and Fallenstein recommend research into "error-tolerant" software systems, citing human error and default incentives as sources of serious risk.[24][3] This includes Jessica Taylor's work on safer alternatives to maximization[25] and Bill Hibbard's work on modeling the behavior of AI agents.[26][27]
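
A rough sense of the "safer alternative to maximization" idea referenced above: a quantilizer, rather than always taking the single action with the highest estimated utility, samples an action from the top q fraction of some trusted base distribution, which limits how far its behavior can stray from that baseline. The sketch below is a simplified reading of Taylor's proposal; the action set, base weights, and utility function are made-up examples.

    import random

    def quantilize(actions, base_weights, utility, q=0.1):
        """Simplified q-quantilizer: rank actions by estimated utility, keep
        the highest-ranked actions until they account for a fraction q of the
        base distribution's probability mass, then sample from that set in
        proportion to the base distribution (rather than taking the argmax)."""
        total = float(sum(base_weights))
        ranked = sorted(zip(actions, base_weights),
                        key=lambda pair: utility(pair[0]), reverse=True)
        kept, mass = [], 0.0
        for action, weight in ranked:
            kept.append((action, weight))
            mass += weight / total
            if mass >= q:
                break
        chosen_actions, chosen_weights = zip(*kept)
        return random.choices(chosen_actions, weights=chosen_weights, k=1)[0]

    # Hypothetical example: ten candidate actions with a uniform base
    # distribution, where utility happens to equal the action's index.
    print(quantilize(list(range(10)), [1.0] * 10, utility=lambda a: a, q=0.3))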

Value specification

Yudkowsky believes that determining the correct goals for autonomous systems is an ethical question.[28] He argues that the intentions of the operators are too vague and contextual to be easily coded.[29] Muehlhauser and Bostrom argue that hard-coded moral values would eventually be seen as obsolete.[12] Soares and Fallenstein propose that autonomous AI systems instead be designed to inductively learn the values of humans from observational data.[17]
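
One way to make the idea of inductively learning values from observational data concrete is a small Bayesian sketch: compare candidate utility functions by how well they explain observed human choices. Everything below (the candidate utilities, the observations, and the softmax choice model) is a hypothetical illustration, not the specific method proposed by Soares and Fallenstein.

    import math

    # Two hypothetical candidate value functions over a tiny choice set.
    candidate_utilities = {
        "prefers_apples":  lambda item: {"apple": 1.0, "banana": 0.0}[item],
        "prefers_bananas": lambda item: {"apple": 0.0, "banana": 1.0}[item],
    }

    observed_choices = ["apple", "apple", "banana", "apple"]  # made-up observations
    options = ("apple", "banana")

    def likelihood(utility, choices):
        """Probability of the observed choices if the chooser picks options
        with probability proportional to exp(utility) (a softmax model)."""
        prob = 1.0
        for choice in choices:
            normalizer = sum(math.exp(utility(o)) for o in options)
            prob *= math.exp(utility(choice)) / normalizer
        return prob

    # Posterior over candidate value functions, assuming a uniform prior.
    scores = {name: likelihood(u, observed_choices)
              for name, u in candidate_utilities.items()}
    total = sum(scores.values())
    print({name: round(score / total, 3) for name, score in scores.items()})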

See also

References

  1. ^ GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
  2. ^ a b Hawking, Stephen; Tegmark, Max; Russell, Stuart; Wilczek, Frank (2014). "Transcending Complacency on Superintelligent Machines". The Huffington Post. Retrieved 11 October 2015.
  3. ^ a b c Basulto, Dominic (2015). "The very best ideas for preventing artificial intelligence from wrecking the planet". The Washington Post. Retrieved 11 October 2015.
  4. ^ Ackerman, Elise (2008). "Annual A.I. conference to be held this Saturday in San Jose". San Jose Mercury News. Retrieved 11 October 2015.
  5. ^ "Scientists Fear Day Computers Become Smarter Than Humans". Fox News Channel. Associated Press. 2007. Retrieved 12 October 2015.
  6. ^ Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle. Retrieved 12 October 2015.
  7. ^ Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle. Retrieved 12 October 2015.
  8. ^ Chen, Angela (2014). "More Rational Resolutions". The Wall Street Journal. Retrieved 5 March 2015.
  9. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 0199678111.
  10. ^ a b c Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. United States: Knopf. ISBN 978-1-101-94659-6.
  11. ^ D'Orazio, Dante (2014). "Elon Musk says artificial intelligence is 'potentially more dangerous than nukes'". The Verge. Retrieved 5 October 2015.
  12. ^ a b c d LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Retrieved 12 October 2015.
  13. ^ a b Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  14. ^ Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  15. ^ Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018.
  16. ^ Bostrom, Nick; Yudkowsky, Eliezer (2014). "The Ethics of Artificial Intelligence". In Frankish, Keith; Ramsey, William (eds.). The Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press. ISBN 978-0-521-87142-6.
  17. ^ a b c Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda". In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. (eds.). The Technological Singularity: Managing the Journey. Springer.
  18. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence". AI Magazine. 36 (4): 105. doi:10.1609/aimag.v36i4.2577.
  19. ^ Sathian, Sanjena. "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. Retrieved 28 July 2018.
  20. ^ Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Retrieved 12 October 2015.
  21. ^ LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  22. ^ Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop (FEW). Retrieved 28 July 2018.
  23. ^ Gallagher, Brian. "Scary AI Is More "Fantasia" Than "Terminator"". Nautilus. Retrieved 28 July 2018.
  24. ^ Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  25. ^ Taylor, Jessica (2016). "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
  26. ^ Hibbard, Bill. "Avoiding Unintended AI Behaviors" (PDF). Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8–11, 2012. Proceedings.
  27. ^ Hibbard, Bill. "Decision Support for Safe AI Design" (PDF). Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8–11, 2012. Proceedings.
  28. ^ Clarke, Richard A. (2017). Warnings: Finding Cassandras to Stop Catastrophes. HarperCollins Publishers. ISBN 0062488023.
  29. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.

External links