Future of Life Institute

42°22′25″N 71°06′35″W

Future of Life Institute
Formation: March 2014
Founders: Max Tegmark, Jaan Tallinn, Viktoriya Krakovna, Meia Chita-Tegmark, Anthony Aguirre
Type: Non-profit research institute
Tax ID no.: 47-1052538
Legal status: Active
Purpose: Mitigation of existential risk
Location: Cambridge, Massachusetts, U.S.
President: Max Tegmark
Website: FutureOfLife.org

The Future of Life Institute (FLI) is a non-profit research institute and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk.

Background

Max Tegmark, professor at MIT, one of the founders and current president of the Future of Life Institute

FLI's mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its course in response to new technologies and challenges.[1][2] FLI is particularly focused on the potential risks to humanity from the development of human-level or superintelligent artificial general intelligence (AGI).[3]

The Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard graduate student and International Mathematical Olympiad (IMO) medalist Viktoriya Krakovna, Boston University graduate student Meia Chita-Tegmark (Tegmark's wife), and UCSC physicist Anthony Aguirre. The Institute's 14-person Scientific Advisory Board comprises 12 men and 2 women, and includes computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[4][5][6]

Events

On May 24, 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[7][8] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn.[9][10] The discussion covered a broad range of topics, from the future of bioengineering and personal genetics to autonomous weapons, AI ethics, and the Singularity.[11]

From January 2 to 5, 2015, FLI organized "The Future of AI: Opportunities and Challenges" conference in Puerto Rico, which brought together leading AI researchers from academia and industry to engage with one another and with experts in economics, law, and ethics. The goal was to identify promising research directions that could help maximize the future benefits of AI.[12] At the conference, the Institute circulated an open letter on AI safety that was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts.[13]

From January 5 to 8, 2017, FLI organized the Beneficial AI conference in Asilomar, California,[14] a private gathering of what The New York Times called "heavy hitters of A.I." (including Yann LeCun, Elon Musk, and Nick Bostrom).[15] The Institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[16]

From January 4 to 7, 2019, FLI organized the Beneficial AGI conference in Puerto Rico.[17] This meeting focused on long-term questions of ensuring that artificial general intelligence is beneficial to humanity.[18]

Global research program

On January 15, 2015, the Future of Life Institute announced that Elon Musk had donated $10 million to fund a global AI research program.[19][20][21] On January 22, 2015, FLI released a request for proposals from researchers at academic and other non-profit institutions.[22] Unlike typical AI research, the program focused on making AI safer or more beneficial to society, rather than simply more powerful.[23] On July 1, 2015, a total of $7 million was awarded to 37 research projects.[24]

In the media

  • "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" in The New York Times[25]
  • "Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.[3]
  • "But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.[4]
  • "Transcending Complacency on Superintelligent Machines", an op-ed in the Huffington Post by Max Tegmark, Stephen Hawking, Frank Wilczek and Stuart J. Russell on the movie Transcendence.[1]
  • "Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea" in Diana Crow Science.[11]
  • "An Open Letter to Everyone Tricked into Fearing Artificial Intelligence", includes "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter" by the FLI [26]
  • Michael del Castillo (15 January 2015). "Startup branding doesn't hide apocalyptic undertones of letter signed by Elon Musk". Upstart Business Journal.
  • Edd Gent (21 January 2015). "Ex Machina movie asks: is AI research in safe hands?". Engineering & Technology. Archived from the original on 26 January 2015. Retrieved 26 January 2015.
  • "Creating Artificial Intelligence" on PBS[27]

References

  1. ^ a b "Transcending Complacency on Superintelligent Machines". Huffington Post. 19 April 2014. Retrieved 26 June 2014.
  2. ^ "CSER News: 'A new existential risk reduction organisation has launched in Cambridge, Massachusetts'". Centre for the Study of Existential Risk. 31 May 2014. Retrieved 19 June 2014.
  3. ^ a b Chen, Angela (11 September 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Retrieved 18 September 2014.
  4. ^ a b "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved 13 April 2020.
  5. ^ "Who we are". Future of Life Institute. Retrieved 13 April 2020.
  6. ^ "Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world". Salon. 5 October 2014. Retrieved 13 April 2020.
  7. ^ "The Future of Technology: Benefits and Risks". Future of Life Institute. 24 May 2014.
  8. ^ "Machine Intelligence Research Institute - June 2014 Newsletter". 2 June 2014. Retrieved 19 June 2014.
  9. ^ "FHI News: 'Future of Life Institute hosts opening event at MIT'". Future of Humanity Institute. 20 May 2014. Retrieved 19 June 2014.
  10. ^ "The Future of Technology: Benefits and Risks". Personal Genetics Education Project. 9 May 2014. Retrieved 19 June 2014.
  11. ^ a b "Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea". Diana Crow Science. 29 May 2014. Retrieved 11 June 2014.
  12. ^ "AI safety conference in Puerto Rico". Future of Life Institute. Retrieved 19 January 2015.
  13. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute.
  14. ^ "Beneficial AI 2017". Future of Life Institute.
  15. ^ Metz, Cade (9 June 2018). "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots". The New York Times. Retrieved 10 June 2018. The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.
  16. ^ "Asilomar AI Principles". Future of Life Institute.
  17. ^ "Beneficial AGI 2019". Future of Life Institute.
  18. ^ "CSER at the Beneficial AGI 2019 Conference". Center for the Study of Existential Risk.
  19. ^ "Elon Musk donates $10M to keep AI beneficial". Future of Life Institute. 15 January 2015.
  20. ^ "Elon Musk donates $10M to Artificial Intelligence research". SlashGear. 15 January 2015.
  21. ^ "Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research". Fast Company. 15 January 2015.
  22. ^ "An International Request for Proposals - Timeline". Future of Life Institute. 22 January 2015.
  23. ^ "2015 INTERNATIONAL GRANTS COMPETITION". Future of Life Institute.
  24. ^ "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial". Future of Life Institute.
  25. ^ Sengupta, Somini; Gladstone, Rick (27 March 2017). "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons". The New York Times.
  26. ^ "An Open Letter to Everyone Tricked into Fearing Artificial Intelligence". Popular Science. 14 January 2015. Retrieved 19 January 2015.
  27. ^ "Creating Artificial Intelligence". PBS. 17 April 2015.