Machine Intelligence Research Institute

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jytdog (talk | contribs) at 01:58, 28 August 2018 (trim unsourced stuff from infobox). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Machine Intelligence Research Institute
Formation: 2000
Type: Nonprofit research institute
Purpose: Research into friendly artificial intelligence
Key people: Eliezer Yudkowsky
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 by Eliezer Yudkowsky, originally to accelerate the development of artificial intelligence; since 2005 it has focused on managing the risks of AI, primarily through a friendly AI approach.

History

In 2000, Eliezer Yudkowsky, who was mostly self-educated and had been involved in the Extropian movement, founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins; the institute's original purpose was to accelerate the development of artificial intelligence (AI).[1][2][3] Yudkowsky grew concerned about managing the risks of AI,[1] and in 2005 the institute moved to Silicon Valley and began to focus on those risks, at a time when scientists in the field were largely unconcerned with them but a group known as transhumanists were voicing concerns.[2]

Starting in 2006, the institute organized the Singularity Summit to discuss the future of AI, including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel; the San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism".[4][5] In 2011, its offices were four apartments in downtown Berkeley.[6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,[7] and the next month took the name "Machine Intelligence Research Institute".[8]

In 2014, public and scientific interest in the risks of AI grew, and increased further after the Future of Life Institute organized a highly publicized scientific conference to set concrete research priorities for addressing those risks; this shift from a topic once considered "crackpot" to the mainstream spurred further donations to fund research at MIRI and similar organizations.[3][9]: 327 

Research and approach

Nate Soares presenting an overview of the AI alignment problem at Google.

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, is primarily concerned with how to design friendly AI, covering both the initial design of AI systems and mechanisms to ensure that evolving AI systems remain friendly.[3][10][11]

MIRI researchers advocate early safety work as a precautionary measure, before it is too late.[12] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner".[10] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change and has developed new measures of the relative computational power of humans and computer hardware.[13]

Works by MIRI staff

  • Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018.
  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  • Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop (FEW). Retrieved 28 July 2018.
  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  • Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda". In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. (eds.). The Technological Singularity: Managing the Journey. Springer.
  • Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  • Taylor, Jessica (2016). "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
  • Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.

References

  1. ^ a b "MIRI: Artificial Intelligence: The Danger of Good Intentions - Future of Life Institute". Future of Life Institute. 11 October 2015.
  2. ^ a b Khatchadourian, Raffi. "The Doomsday Invention". The New Yorker.
  3. ^ a b c Waters, Richard (31 October 2014). "Artificial intelligence: machine v man". Financial Times. Retrieved 27 August 2018.
  4. ^ Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle. Retrieved 12 October 2015.
  5. ^ Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle. Retrieved 12 October 2015.
  6. ^ Kaste, Martin (January 11, 2011). "The Singularity: Humanity's Last Invention?". All Things Considered, NPR.
  7. ^ "Press release: Singularity University Acquires the Singularity Summit". Singularity University. 9 December 2012.
  8. ^ "Press release: We are now the "Machine Intelligence Research Institute" (MIRI) - Machine Intelligence Research Institute". Machine Intelligence Research Institute. 30 January 2013.
  9. ^ Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. United States: Knopf. ISBN 978-1-101-94659-6.
  10. ^ a b LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Retrieved 12 October 2015.
  11. ^ Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  12. ^ Sathian, Sanjena. "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. Retrieved 28 July 2018.
  13. ^ Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Retrieved 12 October 2015.
