Machine Intelligence Research Institute
|Type||Nonprofit research institute|
|Legal status||501(c)(3) tax exempt charity|
|Purpose||Research into friendly artificial intelligence|
|Revenue||$1.7 million (2013)|
|Formerly called||Singularity Institute, Singularity Institute for Artificial Intelligence|
The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to research safety issues related to the development of Strong AI. The organization advocates ideas initially put forth by I. J. Good and Vernor Vinge regarding an "intelligence explosion", or Singularity, which MIRI thinks may follow the creation of sufficiently advanced AI. Research fellow Eliezer Yudkowsky coined the term Friendly AI to refer to a hypothetical super-intelligent AI that has a positive impact on humanity. The organization has argued that to be "Friendly" a self-improving AI needs to be constructed in a transparent, robust, and stable way. MIRI hosts regular research workshops to develop the mathematical foundations for constructing Friendly AI.
Nate Soares is the current Executive Director, taking over from Luke Muehlhauser in May 2015. Inventor and futures studies author Ray Kurzweil served as one of its directors from 2007 to 2010. The institute’s advisory board includes Oxford philosopher Nick Bostrom, PayPal co-founder Peter Thiel, and Foresight Institute co-founder Christine Peterson. MIRI is tax exempt under Section 501(c)(3) of the United States Internal Revenue Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as a Charitable Organization by the Canada Revenue Agency.
MIRI's purpose is "to ensure that the creation of smarter-than-human intelligence has a positive impact". MIRI does not intend to program such an AI itself, and its work does not involve any coding. Instead, it works on the mathematical and philosophical problems that arise when an agent can see how its own mind is constructed and modify important parts of it. Its goal is to build a framework for the creation of a Friendly AI, to ensure that the first superintelligence is not (and does not become) an unfriendly AI.
Friendly and unfriendly AI
A friendly artificial intelligence is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. An unfriendly artificial intelligence, conversely, would have an overall negative impact on humanity. This negative impact could range from the AI not accomplishing its goals quite the way we had originally intended, to the AI destroying humanity as an instrumental step to fulfilling one of its goals. According to Nick Bostrom, an artificial general intelligence will be unfriendly unless its goals are specifically designed to be aligned with human values. The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values.
Key results and papers
MIRI’s research is concentrated in four areas: computational self-reflection, decision procedures, value functions, and forecasting.
In order to modify itself, an AGI will need to reason about its own behavior and prove that the modified version will continue to optimize for the correct goals. This leads to several fundamental problems such as the Löbian obstacle, where an agent cannot prove that a more powerful version of itself is consistent within the current version’s framework. MIRI aims to develop a rigorous basis for self-reflective reasoning to overcome these obstacles.
- Problems of self-reference in self-improving space-time embedded intelligence
- Definability of Truth in Probabilistic Logic
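The Löbian obstacle mentioned above stems from Löb's theorem. Writing $\Box P$ for "P is provable in the agent's formal system", the theorem can be stated as follows (a standard result, included here only as background to the papers listed above):

```latex
% Löb's theorem: if a system proves "a proof of P would imply P",
% then it already proves P outright.
\[
  \Box(\Box P \rightarrow P) \;\rightarrow\; \Box P
\]
% Taking P = \bot shows the system cannot prove \Box\bot \rightarrow \bot,
% i.e. its own consistency (Gödel's second incompleteness theorem),
% so an agent cannot blanket-trust a successor that uses the same logic.
```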
Standard decision procedures are not well-specified enough (e.g., with regard to counterfactuals) to be instantiated as algorithms. These procedures also tend to be inconsistent under reflection: an agent that initially uses causal decision theory will regret doing so, and will attempt to change its own decision procedure. MIRI has developed Timeless Decision Theory, an extension of causal decision theory that has been shown to avoid the failure modes that causal decision theory and evidential decision theory exhibit on problems such as the one-shot Prisoner’s Dilemma and Newcomb’s Paradox.
- Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic
- A Comparison of Decision Algorithms on Newcomblike Problems
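As an illustration of the kind of problem studied here (a toy sketch, not MIRI's formalism), Newcomb's Paradox can be reduced to a payoff computation: a predictor fills an opaque box with $1,000,000 only if it predicts the agent will take just that box, while a transparent box always holds $1,000. Causal decision theory treats the boxes' contents as fixed and two-boxes; evidential decision theory conditions on its own choice and one-boxes. Assuming a perfectly accurate predictor:

```python
# Toy Newcomb's problem: the opaque box holds $1,000,000 iff the
# predictor foresaw one-boxing; the transparent box always holds $1,000.

def payoff(choice, prediction):
    """Payout given the agent's actual choice and the predictor's guess."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque + (1_000 if choice == "two-box" else 0)

def cdt_choice():
    # CDT: the boxes' contents are causally fixed at decision time, and
    # for either fixed prediction, two-boxing adds $1,000 -- so it dominates.
    # (The winner is the same whichever fixed prediction we evaluate against.)
    return max(("one-box", "two-box"), key=lambda c: payoff(c, "one-box"))

def edt_choice():
    # EDT conditions on its own act: with a perfect predictor the
    # prediction equals the choice, so compare payoff(c, c).
    return max(("one-box", "two-box"), key=lambda c: payoff(c, c))

# Against a perfect predictor, the actual payout is payoff(c, c):
# the CDT agent walks away with $1,000, the EDT agent with $1,000,000.
print(cdt_choice(), payoff(cdt_choice(), cdt_choice()))
print(edt_choice(), payoff(edt_choice(), edt_choice()))
```

This is the sense in which CDT "regrets" its own procedure: it predictably earns less than an agent that one-boxes, which motivates decision theories that are stable under reflection.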
Human values are complex and fragile: if even a small part of our value system is removed, the outcome could be of no value to us. For example, if the value of “desiring new experiences” were removed, an optimizing agent might have us relive a single optimized experience ad infinitum, without boredom. How can we ensure that an artificial agent will create a future that we desire, instead of a perverse instantiation of our instructions that misses a critical aspect of what we value? This is known as the value-loading problem (the term is from Bostrom’s Superintelligence).
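The fragility point can be made concrete with a toy optimizer. In this hypothetical sketch (the "novelty" term and the candidate plans are illustrative inventions, not drawn from MIRI's work), deleting a single term from the value function flips the optimum from varied experiences to endless repetition:

```python
# Toy value fragility: an optimizer picks the life-plan with highest value.
# Each plan is a tuple of experience-quality scores.

def value(plan, include_novelty=True):
    quality = sum(plan)                                  # enjoyment of each experience
    novelty = len(set(plan)) if include_novelty else 0   # count of distinct experiences
    return quality + 5 * novelty

plans = [
    (9, 9, 9, 9),   # repeat the single best experience forever
    (9, 8, 7, 6),   # varied experiences, slightly lower quality each
]

best_full = max(plans, key=lambda p: value(p, include_novelty=True))
best_no_novelty = max(plans, key=lambda p: value(p, include_novelty=False))
# With the novelty term, the varied plan wins (30 + 20 = 50 vs 36 + 5 = 41);
# delete that one term and repeating the best experience becomes optimal.
```

The point is not the numbers but the discontinuity: removing one small component of the value function does not degrade the outcome gracefully but changes its character entirely.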
In addition to mathematical research, MIRI also studies strategic questions related to AGI, such as: What can (and can’t) we predict about future AI? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what little we do know?
In 2000, Eliezer Yudkowsky and Internet entrepreneurs Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence". At first, it operated primarily over the Internet, receiving financial contributions from transhumanists and futurists.
In 2002, it published on its website the paper Levels of Organization in General Intelligence, a preprint of a book chapter later included in a compilation of general AI theories entitled Artificial General Intelligence (Ben Goertzel and Cassio Pennachin, eds.). Later that year, it released its two main introductory pieces, "What is the Singularity" and "Why Work Toward the Singularity".
In 2003, the Institute appeared at the Foresight Senior Associates Gathering, where co-founder Eliezer Yudkowsky presented a talk titled "Foundations of Order". They also made an appearance at the Transvision 2003 conference at Yale University with a talk by institute volunteer Michael Anissimov.
In 2004, it released AsimovLaws.com, a website that examined AI morality in the context of the I, Robot film starring Will Smith, which premiered just two days later. From July to October, the institute ran a Fellowship Challenge Grant that raised $35,000 over the course of three months. Early the next year, the Institute relocated from Atlanta, Georgia, to Silicon Valley.
In February 2006, the Institute completed a $200,000 Singularity Challenge fundraising drive, in which every donation up to $100,000 was matched by Clarium Capital President, PayPal co-founder and Institute Advisor Peter Thiel. The stated uses of the funds included hiring additional full-time staff, an additional full-time research fellow position, and the organization of the Singularity Summit at Stanford.
From 2009 to 2012, the Institute released about a dozen papers on subjects including machine ethics, the economic implications of AI, and decision theory. Since 2009, MIRI has published seven peer-reviewed journal articles.
Having previously shortened its name to simply Singularity Institute, in January 2013 it changed its name to the Machine Intelligence Research Institute in order to avoid confusion with Singularity University.
In 2006, the Institute, along with the Symbolic Systems Program at Stanford, the Center for Study of Language and Information, KurzweilAI.net, and Peter Thiel, co-sponsored the Singularity Summit at Stanford. The summit took place on 13 May 2006 at Stanford University with Thiel moderating and 1300 in attendance. The keynote speaker was Ray Kurzweil, followed by eleven others: Nick Bostrom, Cory Doctorow, K. Eric Drexler, Douglas Hofstadter, Steve Jurvetson, Bill McKibben, Max More, Christine Peterson, John Smart, Sebastian Thrun, and Eliezer Yudkowsky.
The 2007 Singularity Summit took place on September 8–9, 2007, at the Palace of Fine Arts Theatre in San Francisco. A third Singularity Summit took place on October 25, 2008, at the Montgomery Theater in San Jose. The 2009 Singularity Summit took place on October 3 at the 92nd Street Y in New York City. The 2010 Summit was held on August 14–15, 2010, at the Hyatt Regency in San Francisco. The 2011 Summit was held on October 16–17, 2011, at the 92nd Street Y in New York. The 2012 Singularity Summit was held on the weekend of October 13–14 at the Nob Hill Masonic Center, 1111 California Street, San Francisco.
Center for Applied Rationality
In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality (CFAR), whose focus is to help people apply the principles of rationality in their day-to-day lives and to research and develop debiasing techniques. The organization is based in Berkeley, California, in the San Francisco Bay Area. CFAR develops and tests training strategies built on cognitive-science research into how people form and change their beliefs. It also gives workshops that train people to internalize these strategies and use them regularly, improving their reasoning and decision-making skills and helping them achieve their goals. According to co-founder and president Julia Galef, the term "Applied" refers to a practical version of rationality in which people not only know how to be rational but also understand when being rational makes a difference. Among the exercises taught in the three-day workshops are Goal Factoring, Pre-Hindsight, and Structured Procrastination.
- "IRS Form 990" (PDF). Machine Intelligence Research Institute. 2013. Retrieved 17 December 2014.
- Intelligence Explosion Microeconomics
- What is Friendly AI?
- MIRI Overview
- Research workshops
- New Executive Director
- About Luke Muehlhauser
- I, Rodney Brooks, Am a Robot
- The Foundations of AI safety
- Bostrom, Nick (2014). "Is the default outcome doom?". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111.
Proceeding from the idea of first-mover advantage, the orthogonality thesis, and the instrumental convergence thesis, we can now begin to see the outlines of an argument for fearing that a plausible default outcome of the creation of machine superintelligence is existential catastrophe.
- Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (1st ed.). ISBN 9780307744258.
Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"...
- Tiling Agents for Self-Modifying AI, and the Löbian Obstacle
- A Comparison of Decision Algorithms on Newcomblike Problems
- Timeless Decision Theory
- Bostrom, Nick (2014). "Acquiring Values". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111.
- Scientists Fear Day Computers Become Smarter Than Humans, September 12, 2007
- Artificial Intelligence Conference in S.J. this week, San Jose Mercury News, October 24, 2008
- Levels of Organization in General Intelligence
- "What is the Singularity"
- "Why Work Toward the Singularity"
- "Humanity 2.0: transhumanists believe that human nature's a phase we'll outgrow, like adolescence. Someday we'll be full-fledged adult posthumans, with physical and intellectual powers of which we can now only dream. But will progress really make perfect?"
- Singularity Challenge
- The Singularity: Humanity's Last Invention?, Martin Kaste, National Public Radio
- Singularity Institute - Recent Publications
- "We are now the “Machine Intelligence Research Institute” (MIRI)", Luke Muehlhauser, 30 January 2013
- Smarter than thou?, San Francisco Chronicle, 12 May 2006
- Public meeting will re-examine future of artificial intelligence, Tom Abate, SFGate.com, September 7, 2007
- Silicon Valley tycoon embraces sci-fi future MSNBC Tech & Science
- "Singularity Summit: Logistics". SingularitySummit.com. Retrieved 2012-09-25.
- "July 2012 Newsletter". Singularity Institute.
- "About Us". Center for Applied Rationality.
- Stiefel, Todd; Metskas, Amanda K. (22 May 2013). "Julia Galef". The Humanist Hour. Episode 083. The Humanist. Retrieved 3 March 2015.
- Chen, Angela (1 January 2014). "More Rational Resolutions". The Wall Street Journal. Retrieved 5 March 2015.