Eliezer Yudkowsky
Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American blogger, writer, and advocate for Friendly artificial intelligence.
Biography
Yudkowsky, a resident of Berkeley, California, has no formal education in computer science or artificial intelligence.[1] He co-founded the nonprofit Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence) in 2000 and continues to be employed there as a full-time Research Fellow.[2]
Work
Yudkowsky's interests focus on artificial intelligence theory for self-understanding, self-modification, and recursive self-improvement, and on artificial-intelligence architectures and decision theories for stable motivational structures (Friendly AI and Coherent Extrapolated Volition in particular).[3] His most recent work is on decision theory for problems of self-modification and for Newcomblike problems, decision problems in which another agent can reliably predict one's choice, as in Newcomb's paradox.
Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias,[4] sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality".[5]
Yudkowsky contributed two chapters to Oxford philosopher Nick Bostrom's and Milan Ćirković's edited volume Global Catastrophic Risks.[6]
Yudkowsky has also written several works of science fiction and other fiction. His Harry Potter fan fiction story Harry Potter and the Methods of Rationality illustrates topics in cognitive science and rationality (The New Yorker described it as "a thousand-page online 'fanfic' text ... which recasts the original story in an attempt to explain Harry's wizardry through the scientific method"[7]). It has been reviewed by authors David Brin[8][9][10][11] and Rachel Aaron,[12][13] by economist Robin Hanson,[14] by Aaron Swartz,[15] and by programmer Eric S. Raymond.[16]
References
- ^ Miller, James (2012). Singularity Rising. p. 35.
- ^ Kurzweil, Ray (2005). The Singularity Is Near. New York, US: Viking Penguin. p. 599. ISBN 0-670-03384-7.
- ^ Kurzweil, Ray (2005). The Singularity Is Near. New York, US: Viking Penguin. p. 420. ISBN 0-670-03384-7.
- ^ "Overcoming Bias: About". Robin Hanson. Retrieved 2012-02-01.
- ^ "Welcome to Less Wrong". Less Wrong. Retrieved 2012-02-01.
- ^ Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global Catastrophic Risks. Oxford, UK: Oxford University Press. pp. 91–119, 308–345. ISBN 978-0-19-857050-9.
- ^ "No Death, No Taxes: The libertarian futurism of a Silicon Valley billionaire". The New Yorker. p. 54.
- ^ David Brin (2010-06-21). "CONTRARY BRIN: A secret of college life... plus controversies and science!". Davidbrin.blogspot.com. Retrieved 2012-08-31.
- ^ "'Harry Potter' and the Key to Immortality", Daniel Snyder, The Atlantic
- ^ David Brin (2012-01-20). "CONTRARY BRIN: David Brin's List of "Greatest Science Fiction and Fantasy Tales"". Davidbrin.blogspot.com. Retrieved 2012-08-31.
- ^ David Brin (2013). "Science fiction and our duty to past". Davidbrin.blogspot.com. http://davidbrin.blogspot.com/2013/02/science-fiction-and-our-duty-to-past.html
- ^ "Rachel Aaron interview (April 2012)". Fantasybookreview.co.uk. 2012-04-02. Retrieved 2012-08-31.
- ^ "Civilian Reader: An Interview with Rachel Aaron". Civilian-reader.blogspot.com. 2011-05-04. Retrieved 2012-08-31.
- ^ Hanson, Robin (2010-10-31). "Hyper-Rational Harry". Overcoming Bias. Retrieved 2012-08-31.
- ^ Swartz, Aaron. "The 2011 Review of Books (Aaron Swartz's Raw Thought)". archive.org. Retrieved 2013-04-10.
- ^ "Harry Potter and the Methods of Rationality". Esr.ibiblio.org. 2010-07-06. Retrieved 2012-08-31.
Publications
- Creating Friendly AI (2001)
- Levels of Organization in General Intelligence (2002)
- Coherent Extrapolated Volition (2004)
- Timeless Decision Theory (2010)
- Complex Value Systems are Required to Realize Valuable Futures (2011)
- Tiling Agents for Self-Modifying AI, and the Löbian Obstacle (2013)
- A Comparison of Decision Algorithms on Newcomblike Problems (2013)
- Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic (2014)
Further reading
- Our Molecular Future: How Nanotechnology, Robotics, Genetics and Artificial Intelligence Will Transform Our World by Douglas Mulhall, 2002, p. 321.
- The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies by Damien Broderick, 2001, pp. 236, 265–272, 289, 321, 324, 326, 337–339, 345, 353, 370.
External links
- Personal website
- Less Wrong - "A community blog devoted to refining the art of human rationality" co-founded by Yudkowsky.
- Harry Potter and the Methods of Rationality