Machine Intelligence Research Institute

Formation: 2000
Type: Nonprofit research institute
Legal status: 501(c)(3) tax-exempt charity
Purpose: Research into friendly artificial intelligence
Chair of the board: Edwin Evans
Executive director: Luke Muehlhauser
Key people: Eliezer Yudkowsky
Revenue: $1.7 million (2013)[1]
Staff: 9[1]
Website: intelligence.org
Formerly called: Singularity Institute, Singularity Institute for Artificial Intelligence

The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to research safety issues related to the development of Strong AI. The organization advocates ideas initially put forth by I. J. Good and Vernor Vinge regarding an "intelligence explosion", or Singularity, which MIRI thinks may follow the creation of sufficiently advanced AI.[2] Research fellow Eliezer Yudkowsky coined the term Friendly AI to refer to a hypothetical superintelligent AI that has a positive impact on humanity.[3] The organization has argued that to be "Friendly" a self-improving AI needs to be constructed in a transparent, robust, and stable way.[4] MIRI hosts regular research workshops to develop the mathematical foundations for constructing Friendly AI.[5]

Luke Muehlhauser is the institute's executive director.[6] Inventor and futures studies author Ray Kurzweil served as one of its directors from 2007 to 2010.[7] The institute's advisory board includes Oxford philosopher Nick Bostrom, PayPal co-founder Peter Thiel, and Foresight Nanotech Institute co-founder Christine Peterson. MIRI is tax exempt under Section 501(c)(3) of the United States Internal Revenue Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as a Charitable Organization by the Canada Revenue Agency.

Purpose

MIRI's purpose is "to ensure that the creation of smarter-than-human intelligence has a positive impact".[8] MIRI does not itself intend to program an AI that will have a positive impact, and its work does not involve writing code. Instead, it works on the mathematical and philosophical problems that arise when an agent can see how its own mind is constructed and can modify important parts of it. The goal is to build a framework for the creation of a Friendly AI, to ensure that the first superintelligence is not (and does not become) an Unfriendly AI.

Friendly and Unfriendly AI

A friendly artificial intelligence is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. The term was coined by Eliezer Yudkowsky[10] to discuss superintelligent artificial agents that reliably implement human values. An unfriendly artificial intelligence, conversely, would have an overall negative impact on humanity; this impact could range from the AI accomplishing its goals in ways we did not originally intend, to the AI destroying humanity as an instrumental step toward fulfilling one of its goals. According to Nick Bostrom, an artificial general intelligence will be unfriendly unless its goals are specifically designed to align with human values.[9]

History

In 2000, Eliezer Yudkowsky[11] and Internet entrepreneurs Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence".[12] At first, it operated primarily over the Internet, receiving financial contributions from transhumanists and futurists.

In 2002, it published on its website the paper Levels of Organization in General Intelligence,[13] a preprint of a book chapter later included in a compilation of general AI theories, entitled "Artificial General Intelligence" (Ben Goertzel and Cassio Pennachin, eds.). Later that year, it released its two main introductory pieces, "What is the Singularity"[14] and "Why Work Toward the Singularity".[15]

In 2003, the Institute appeared at the Foresight Senior Associates Gathering, where co-founder Eliezer Yudkowsky presented a talk titled "Foundations of Order". It also appeared at the Transvision 2003 conference[16] at Yale University, with a talk by institute volunteer Michael Anissimov.

In 2004, it released AsimovLaws.com,[17] a website examining AI morality in the context of the film I, Robot starring Will Smith, which was released in theaters two days later. From July to October, the institute ran a Fellowship Challenge Grant that raised $35,000 over three months. Early the next year, the Institute relocated from Atlanta, Georgia, to Silicon Valley.

In February 2006, the Institute completed a $200,000 Singularity Challenge fundraising drive,[18] in which every donation up to $100,000 was matched by Clarium Capital President, PayPal co-founder and Institute Advisor Peter Thiel.[19] The stated uses of the funds included hiring additional full-time staff, an additional full-time research fellow position, and the organization of the Singularity Summit at Stanford.

From 2009 to 2012, the Institute released about a dozen papers on subjects including machine ethics, the economic implications of AI, and decision theory.[20] Since 2009, MIRI has published seven peer-reviewed journal articles.[21]

Having previously shortened its name to simply Singularity Institute, the organization changed its name to the Machine Intelligence Research Institute in January 2013 to avoid confusion with Singularity University.[22]

Singularity Summit

In 2006, the Institute, along with the Symbolic Systems Program at Stanford, the Center for the Study of Language and Information, KurzweilAI.net, and Peter Thiel, co-sponsored the Singularity Summit at Stanford.[23] The summit took place on May 13, 2006, at Stanford University, with Thiel moderating and 1,300 in attendance. The keynote speaker was Ray Kurzweil,[24] followed by eleven others: Nick Bostrom, Cory Doctorow, K. Eric Drexler, Douglas Hofstadter, Steve Jurvetson, Bill McKibben, Max More, Christine Peterson, John Smart, Sebastian Thrun, and Eliezer Yudkowsky.

The 2007 Singularity Summit took place on September 8–9, 2007, at the Palace of Fine Arts Theatre in San Francisco. A third Singularity Summit took place on October 25, 2008, at the Montgomery Theater in San Jose. The 2009 Summit took place on October 3 at the 92nd Street Y in New York City. The 2010 Summit was held on August 14–15 at the Hyatt Regency in San Francisco.[25] The 2011 Summit was held on October 16–17 at the 92nd Street Y in New York. The 2012 Summit was held on October 13–14 at the Nob Hill Masonic Center, 1111 California Street, San Francisco.[26]

Center for Applied Rationality

In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, whose focus is to help people apply the principles of rationality in their day-to-day life and to research and develop de-biasing techniques.[27][28]

Key results and papers

MIRI's research is concentrated in four areas: computational self-reflection, decision procedures, value functions, and forecasting.

Computational self-reflection

In order to modify itself, an AGI will need to reason about its own behavior and prove that the modified version will continue to optimize for the correct goals. This leads to several fundamental problems such as the Löbian obstacle, where an agent cannot prove that a more powerful version of itself is consistent within the current version’s framework.[29] MIRI aims to develop a rigorous basis for self-reflective reasoning to overcome these obstacles.
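
The obstacle can be stated compactly in the notation of provability logic. The following is a sketch of the textbook form of Löb's theorem, not MIRI's exact formalism: for a formal theory T extending Peano Arithmetic, with \Box P abbreviating "T proves P",

    \[ T \vdash \Box(\Box P \rightarrow P) \;\rightarrow\; \Box P \qquad \text{(L\"ob's theorem)} \]

In particular, if an agent reasoning in T could prove the reflection schema \Box\phi \rightarrow \phi ("whatever I prove is true") for arbitrary \phi, Löb's theorem would force T to prove every \phi, making T inconsistent. This is the sense in which a parent agent cannot naively verify a successor that reasons in the same (or a stronger) proof system.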

Decision procedures

Standard decision procedures are not specified precisely enough (e.g., with regard to counterfactuals) to be instantiated as algorithms.[30] They also tend to be inconsistent under reflection: an agent that initially uses causal decision theory will regret doing so, and will attempt to change its own decision procedure. MIRI has developed Timeless Decision Theory, an extension of causal decision theory, which has been shown to avoid the failure modes that causal and evidential decision theory exhibit on problems such as the Prisoner's Dilemma and Newcomb's Problem.[31]
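
The contrast can be made concrete with Newcomb's Problem. The following minimal Python sketch is illustrative only: the payoff values are the conventional ones from the literature, and the 99% predictor accuracy is an assumption. It computes expected payoffs two ways, conditioning on the agent's choice (the evidential- or timeless-style calculation) versus holding the prediction fixed (the causal calculation):

    # Newcomb's Problem: a predictor puts $1,000,000 in an opaque box only
    # if it predicts the agent will take that box alone; a transparent box
    # always holds $1,000. Accuracy and payoffs are assumed values.
    ACCURACY = 0.99
    BIG, SMALL = 1_000_000, 1_000

    def evidential_value(one_box):
        # Condition on the choice: choosing only the opaque box is strong
        # evidence that the predictor filled it.
        p_big = ACCURACY if one_box else 1 - ACCURACY
        return p_big * BIG + (0 if one_box else SMALL)

    def causal_value(one_box, p_big):
        # Hold the already-made prediction fixed: the choice cannot
        # causally affect the box contents.
        return p_big * BIG + (0 if one_box else SMALL)

    print(evidential_value(True), evidential_value(False))  # 990000.0 11000.0
    # For any fixed p_big, two-boxing gains exactly $1,000, so a causal
    # reasoner two-boxes, forgoes the $1,000,000, and regrets its theory.
    for p in (0.0, 0.5, 1.0):
        print(causal_value(True, p), causal_value(False, p))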

Value functions

Human values are complex and fragile: if even a small part of our value system is removed, the outcome could be of no value to us. For example, if the value of "desiring new experiences" were removed, we might relive a single optimized experience ad infinitum without ever getting bored. How can we ensure that an artificial agent will create a future that we desire, rather than a perverse instantiation of our instructions that misses a critical aspect of what we value? This is known as the value-loading problem (from Bostrom's Superintelligence).[32]
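
The fragility claim can be illustrated with a deliberately tiny toy model in Python; all plan names and numbers below are hypothetical, chosen only to show how omitting a single value term flips the optimum:

    # Toy value-loading illustration: the proxy objective omits "novelty",
    # so hard optimization of the proxy selects the degenerate plan.
    plans = {
        "repeat one optimized experience forever": {"pleasure": 10.0, "novelty": 0.0},
        "varied but slightly less pleasant life":  {"pleasure": 8.0,  "novelty": 9.0},
    }

    def true_value(p):
        return p["pleasure"] + p["novelty"]   # what we actually care about

    def proxy_value(p):
        return p["pleasure"]                  # "novelty" accidentally omitted

    print(max(plans, key=lambda k: proxy_value(plans[k])))  # degenerate plan wins
    print(max(plans, key=lambda k: true_value(plans[k])))   # varied plan wins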

Forecasting

In addition to mathematical research, MIRI also studies strategic questions related to AGI, such as: What can (and can’t) we predict about future AI? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what little we do know?

References

  1. ^ a b "IRS Form 990". Machine Intelligence Research Institute. 2013. Retrieved 17 December 2014. 
  2. ^ Intelligence Explosion Microeconomics
  3. ^ What is Friendly AI?
  4. ^ MIRI Overview
  5. ^ Research workshops
  6. ^ About Us
  7. ^ I, Rodney Brooks, Am a Robot
  8. ^ The Foundations of AI safety
  9. ^ Bostrom, Nick (2014). "Is the default outcome doom?". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111. Proceeding from the idea of first-mover advantage, the orthogonality thesis, and the instrumental convergence thesis, we can now begin to see the outlines of an argument for fearing that a plausible default outcome of the creation of machine superintelligence is existential catastrophe. 
  10. ^ Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (1st ed.). ISBN 9780307744258. Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"... 
  11. ^ "Scientists Fear Day Computers Become Smarter Than Humans", September 12, 2007
  12. ^ "Artificial Intelligence Conference in S.J. this week", San Jose Mercury News, October 24, 2008
  13. ^ Levels of Organization in General Intelligence
  14. ^ "What is the Singularity"
  15. ^ "Why Work Toward the Singularity"
  16. ^ "Humanity 2.0: transhumanists believe that human nature's a phase we'll outgrow, like adolescence. Someday we'll be full-fledged adult posthumans, with physical and intellectual powers of which we can now only dream. But will progress really make perfect?"
  17. ^ AsimovLaws.com
  18. ^ Singularity Challenge
  19. ^ The Singularity: Humanity's Last Invention?, Martin Kaste, National Public Radio
  20. ^ Singularity Institute - Recent Publications[dead link]
  21. ^ [1]
  22. ^ "We are now the “Machine Intelligence Research Institute” (MIRI)", Luke Muehlhauser, 30 January 2013
  23. ^ Smarter than thou?, San Francisco Chronicle, 12 May 2006
  24. ^ "Public meeting will re-examine future of artificial intelligence", Tom Abate, SFGate.com, September 7, 2007
  25. ^ Silicon Valley tycoon embraces sci-fi future MSNBC Tech & Science
  26. ^ "Singularity Summit: Logistics". SingularitySummit.com. Retrieved 2012-09-25. 
  27. ^ "July 2012 Newsletter". Singularity Institute. 
  28. ^ "About Us". Center for Applied Rationality. 
  29. ^ Tiling Agents for Self-Modifying AI, and the Löbian Obstacle
  30. ^ A Comparison of Decision Algorithms on Newcomblike Problems
  31. ^ Timeless Decision Theory
  32. ^ Bostrom, Nick (2014). "Acquiring Values". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111. 
