LessWrong

From Wikipedia, the free encyclopedia

LessWrong
[LessWrong logo]
Type of site: Internet forum, blog
Available in: English
Created by: Eliezer Yudkowsky
URL: LessWrong.com
Registration: Optional, but required for contributing content
Launched: February 1, 2009
Current status: Active
Written in: Python, JavaScript, CSS (powered by Reddit source code)

LessWrong, also written as Less Wrong, is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.[2][3] Its administrative costs are supported by the Machine Intelligence Research Institute and the Center for Applied Rationality, both in Berkeley, and by the Future of Humanity Institute at the University of Oxford.[4]

History

Creation from blog posts for Overcoming Bias

The LessWrong FAQ describes the site's history as follows.[5] Starting in November 2006, Eliezer Yudkowsky wrote many blog posts about rationality for Overcoming Bias, then a group blog he shared with the economist and futurist Robin Hanson. In February 2009, Yudkowsky's posts were used as seed material to create LessWrong, the community website.

Roko's basilisk

Roko's basilisk is a thought experiment positing that an otherwise benevolent future artificial intelligence (AI) would torture the simulated selves of the people who did not help bring it into existence. It would do so to blackmail people who contemplate the idea today into helping the AI come into being, so that it could end all other causes of death and suffering (something most of LessWrong expects a recursively self-improving AI to be capable of).[6][7]

The concept was proposed in 2010 by the contributor Roko in a discussion on LessWrong. Yudkowsky deleted the relevant posts and banned further discussion of Roko's basilisk on LessWrong after the idea apparently caused considerable anguish to several contributors who took it seriously.[6][7]

Relation with other communities

Nexus with effective altruism community

Although LessWrong is focused on the art of human rationality rather than on promoting altruism, its founder and early members have been interested in doing good for the world, and the site has been closely associated with the effective altruism movement.[8] GiveWell, a charity evaluator focused on effective altruism, has benefited from outreach to LessWrong.[9][10] The effective altruist Ryan Carey wrote:

I think of LessWrong as an effective altruist group too. It's nominally a group for rationality, but almost everyone is altruistic when you get down far enough. LessWrong shares EAs' affinity for the scientific method and for evaluating consequences. In turn, EAs frequently share LW-esque views about biases and epistemology. They're sister groups in Melbourne, and I think everywhere, so I think all of these communities are tremendously important for encouraging people to do good.[11]

The interest of LessWrongers in effective altruism has been attributed to a combination of founder effects and an intrinsic connection between rationality and effective altruism.[12]

Differences from other rationality communities

LessWrong differs from other rationalist and skeptic communities and websites in that it seriously engages with ideas that may seem weird or unconventional, such as transhumanism, cryonics, and the importance of developing friendly artificial intelligence.[13]

Media coverage

LessWrong has been covered in depth in Business Insider[3] and Slate.[6] Core concepts from LessWrong have been referenced in columns in The Guardian.[14][15]

LessWrong has been mentioned briefly in articles related to the technological singularity and the work of the Machine Intelligence Research Institute (formerly called the Singularity Institute).[16] It has also been mentioned in articles about online monarchists and neo-reactionaries.[17]

References

  1. ^ "lesswrong.com Site Overview". Alexa Internet. Retrieved 2015-02-17.
  2. ^ "Less Wrong FAQ". LessWrong.
  3. ^ a b Miller, James (July 28, 2011). "You Can Learn How To Become More Rational". Business Insider. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)
  4. ^ "What organizations are involved with Less Wrong? (Less Wrong FAQ)". Retrieved March 25, 2014.
  5. ^ "Where did Less Wrong come from? (LessWrong FAQ)". Retrieved March 25, 2014.
  6. ^ a b c Auerbach, David (17 July 2014). "The Most Terrifying Thought Experiment of All Time". Slate. Retrieved 18 July 2014.
  7. ^ a b Love, Dylan (6 August 2014). "WARNING: Just Reading About This Thought Experiment Could Ruin Your Life". Business Insider. Retrieved 6 December 2014.
  8. ^ "Articles Tagged 'effective-altruism'". LessWrong. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)
  9. ^ Sinick, Jonah. "My Less Wrong posts about GiveWell". Retrieved March 25, 2014.
  10. ^ "GiveWell Metrics Report -- 2013 Annual Review" (PDF). Retrieved March 25, 2014.
  11. ^ Hurford, Peter (March 24, 2014). "Interview with Ryan Carey". Everyday Utilitarian. Retrieved March 25, 2014.
  12. ^ Sinick, Jonah (March 20, 2014). "To what extent does improved rationality lead to effective altruism?". LessWrong. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)
  13. ^ "About Less Wrong". LessWrong.
  14. ^ Burkeman, Oliver (July 8, 2011). "This column will change your life: Feel the ugh and do it anyway. Can the psychological flinch mechanism be beaten?". The Guardian. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)
  15. ^ Burkeman, Oliver (March 9, 2012). "This column will change your life: asked a tricky question? Answer an easier one. We all do it, all the time. So how can we get rid of this eccentricity?". The Guardian. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)
  16. ^ Tiku, Natasha (July 25, 2012). "Faith, Hope, and Singularity: Entering the Matrix with New York's Futurist Set It's the end of the world as we know it, and they feel fine". BetaBeat. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help); line feed character in |title= at position 79 (help)
  17. ^ Finley, Klint (November 22, 2013). "Geeks for Monarchy: The Rise of the Neoreactionaries". TechCrunch. Retrieved March 25, 2014. {{cite web}}: Italic or bold markup not allowed in: |publisher= (help)