Nick Bostrom
Bostrom in 2012
Born: Niklas Boström, 10 March 1973, Helsingborg, Sweden
Alma mater: University of Gothenburg; Stockholm University; King's College London; London School of Economics
Occupation: Philosopher
Employer: University of Oxford
Known for: Existential risk, anthropic bias, the reversal test, the simulation hypothesis, ethical consequentialism
Thesis: Observational Selection Effects and Probability
Website: NickBostrom.com

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[1] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. He holds a PhD from the London School of Economics (2000). In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[2] and he is currently the founding director of the Future of Humanity Institute[3] at Oxford University.

He is the author of over 200 publications,[4] including Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller,[5] and Anthropic Bias.[6] In 2009, he was included in Foreign Policy's Top 100 Global Thinkers list.[7] Bostrom’s work on superintelligence has influenced both Elon Musk’s and Bill Gates’s concern for the existential risks facing humanity over the coming century.[8][9][10]

Early life and education

Bostrom was born in 1973 in Helsingborg, Sweden.[11] He holds a B.A. in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and in computational neuroscience from King's College London. In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[6][12]

Philosophy

Existential risk

An important aspect of Bostrom’s research concerns the future of humanity and long-term outcomes.[13][14] He introduced the concept of an existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume "Global Catastrophic Risks", editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[15] and the Fermi paradox.[16] In a 2013 paper in the journal Global Policy, Bostrom offers a taxonomy of existential risk and proposes a reconceptualization of sustainability in dynamic terms, as a developmental trajectory that minimizes existential risk.[17]

The philosopher Derek Parfit argued for the importance of ensuring the survival of humanity, due to the value of a potentially large number of future generations.[18] Similarly, Bostrom has said that, from a consequentialist perspective, even small reductions in the cumulative amount of existential risk that humanity will face are extremely valuable, to the point where the traditional utilitarian imperative—to maximize expected utility—can be simplified to the Maxipok principle: maximize the probability of an OK outcome (where an OK outcome is any that avoids existential catastrophe).[19][20]
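
The consequentialist step here can be made explicit with a short expected-value sketch (an illustration of the cited reasoning, not a formula appearing in Bostrom's papers). Normalize the value of existential catastrophe to zero and let V_ok denote the expected value of any outcome that avoids it; then

    \mathbb{E}[U] = P(\text{OK}) \cdot V_{\text{ok}} + (1 - P(\text{OK})) \cdot 0 = P(\text{OK}) \cdot V_{\text{ok}}

Since V_ok is, on Bostrom's view, astronomically large and roughly independent of which policies we choose, maximizing expected utility approximately reduces to maximizing P(OK), which is the Maxipok rule.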

In 2005, Bostrom founded the Future of Humanity Institute, which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[14]

Superintelligence

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasons that, with "cognitive performance greatly [exceeding] that of humans in virtually all domains of interest", superintelligent agents could promise substantial societal benefits while also posing a significant existential risk from artificial general intelligence. He therefore argues that it is crucial to approach this area with caution and to take active steps to mitigate the risks we face. In January 2015, Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, and Jaan Tallinn, among others, in signing the Future of Life Institute's open letter warning of the potential dangers associated with artificial intelligence. The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."[21][22]

Anthropic reasoning

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[23]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) and showed how they lead to different conclusions in a number of cases. He pointed out that each is affected by paradoxes or counterintuitive implications in certain thought experiments (the SSA in e.g. the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition by "observer-moments". This could allow for the reference class to be relativized (and he derived an expression for this in the “observation equation”).
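
The divergence between SSA and SIA can be seen in a minimal Doomsday-style calculation. The sketch below uses invented round numbers (a 50/50 prior and a birth rank of roughly 100 billion) purely for illustration; it is not drawn from Bostrom's book.

    # Toy Doomsday-style comparison of SSA and SIA.
    # All figures are invented for illustration and are not from Bostrom's book.
    prior = {"short": 0.5, "long": 0.5}    # prior over how many humans will ever live
    total = {"short": 2e11, "long": 2e14}  # total observers under each hypothesis
    rank = 1e11                            # your birth rank (~100 billionth human)

    # SSA: reason as if you were a random sample from all observers in your world,
    # so P(your rank | hypothesis) = 1/N whenever rank <= N.
    likelihood = {h: (1.0 / n if rank <= n else 0.0) for h, n in total.items()}

    def normalize(weights):
        z = sum(weights.values())
        return {h: w / z for h, w in weights.items()}

    ssa = normalize({h: prior[h] * likelihood[h] for h in prior})
    # SIA: additionally weight each hypothesis by its number of observers,
    # which exactly cancels the 1/N factor above.
    sia = normalize({h: prior[h] * total[h] * likelihood[h] for h in prior})

    print("SSA posterior:", ssa)  # heavily favors "short": the Doomsday shift
    print("SIA posterior:", sia)  # restores the 50/50 prior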

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[24] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
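
A toy Monte Carlo conveys the effect (the catastrophe rate and time horizon below are arbitrary assumptions, not figures from the paper): when a catastrophe would have eliminated the observers themselves, every surviving observer's historical record shows zero occurrences, however high the true rate.

    import random

    # Toy Monte Carlo of the anthropic shadow. The per-century catastrophe rate
    # and the length of the record are arbitrary assumptions for illustration.
    random.seed(0)
    true_rate = 0.02   # true per-century chance of an observer-eliminating catastrophe
    centuries = 50     # length of the past record an observer can inspect
    trials = 100_000

    survived = sum(
        all(random.random() > true_rate for _ in range(centuries))
        for _ in range(trials)
    )

    # Observers exist only in surviving histories, and in every such history the
    # observed frequency of this catastrophe class is exactly zero.
    print(f"true rate: {true_rate}")
    print(f"histories that still contain observers: {survived / trials:.3f}")
    print("frequency recorded by any surviving observer: 0.0 (biased low)")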

Ethics of human enhancement

Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[25][26] as well as a critic of bio-conservative views.[27] With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[28]
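
Schematically, the reversal test is a burden-shifting procedure, which a short sketch can encode (the boolean judgments are hypothetical placeholders; the test's contribution is the structure, not any particular verdict):

    # Schematic encoding of the reversal test. The boolean judgments are
    # hypothetical inputs; the test contributes the burden-shifting structure.
    def reversal_test(bad_to_increase: bool, bad_to_decrease: bool,
                      reason_current_level_is_optimal: bool) -> str:
        if bad_to_increase and bad_to_decrease and not reason_current_level_is_optimal:
            # If changing the trait in either direction is judged bad, and no
            # positive reason is offered for thinking the current level is a
            # local optimum, suspect status quo bias behind the objection.
            return "suspect status quo bias; burden of proof shifts to the objector"
        return "not diagnosed as status quo bias by this test"

    # Example: objecting both to raising and to lowering a trait without
    # arguing that its current level is optimal.
    print(reversal_test(True, True, False))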

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[25] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom is the 2009 recipient of the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement[29][30] and was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[31]

Technology strategy

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the Principle of Differential Technological Development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.
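
A toy simulation illustrates why ordering matters (the exponential arrival times and the acceleration factors are invented assumptions, not Bostrom's): if a dangerous capability arrives before its protective counterpart, the world passes through an unprotected window.

    import random

    # Toy model of differential technological development: what matters is whether
    # a protective technology arrives before a dangerous one. Exponential arrival
    # times and the 1x-4x acceleration factors are invented assumptions.
    random.seed(1)

    def unprotected_window_probability(safety_speedup: float, trials: int = 100_000) -> float:
        hits = 0
        for _ in range(trials):
            t_dangerous = random.expovariate(1.0)              # dangerous capability
            t_protective = random.expovariate(safety_speedup)  # protective capability
            if t_dangerous < t_protective:
                hits += 1  # dangerous tech arrives with no protection in place
        return hits / trials

    for speedup in (1.0, 2.0, 4.0):
        p = unprotected_window_probability(speedup)
        print(f"protective R&D {speedup:.0f}x faster -> unprotected arrival {p:.1%} of runs")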

Simulation argument

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

To estimate the probability of at least one of those propositions holding, he offers the following equation:[32]

    f_{sim} = \frac{f_p N H}{(f_p N H) + H}

where:

f_p is the fraction of all human civilizations that will reach a technological capability to program reality simulators.
N is the average number of ancestor-simulations run by the civilizations mentioned by f_p.
H is the average number of individuals who have lived in a civilization before it was able to perform reality simulation.
f_{sim} is the fraction of all humans who live in virtual realities.

N can be calculated by multiplying the fraction of civilizations interested in performing such simulations (f_1) by the number of simulations run by such civilizations (N_1):

    N = f_1 N_1

Thus the formula becomes:

    f_{sim} = \frac{f_p f_1 N_1}{(f_p f_1 N_1) + 1}

Because post-human computing power (N_1) will be such a large value, at least one of the following three approximations will be true:

    f_p ≈ 0
    f_1 ≈ 0
    f_{sim} ≈ 1
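
A short numerical sketch (with invented parameter values) shows how the trilemma arises from the formula: because N_1 is astronomically large for post-human civilizations, f_sim is forced toward 1 unless f_p or f_1 is essentially zero.

    # Numerical sketch of the trilemma, with invented parameter values.
    def f_sim(f_p: float, f_1: float, n_1: float) -> float:
        """Fraction of observers who are simulated, per the formula above."""
        simulated = f_p * f_1 * n_1
        return simulated / (simulated + 1.0)

    # Even small fractions, multiplied by vast post-human computing power n_1,
    # drive f_sim toward 1 ...
    print(f_sim(f_p=0.01, f_1=0.01, n_1=1e12))   # ~1.0
    # ... unless one of the fractions is essentially zero.
    print(f_sim(f_p=1e-15, f_1=0.01, n_1=1e12))  # ~1e-5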

Policy and consultations

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills. With Anders Sandberg, he was a consultant to the UK Government Office for Science (GOSE) and Foresight for the “The Future of Human Identity” report, and he served as an Expert Member of the World Economic Forum’s Agenda Council for Catastrophic Risks. He is an advisory board member for the Machine Intelligence Research Institute, the Future of Life Institute, and the Foundational Questions Institute in Physics and Cosmology, and an external advisor for the Cambridge Existential Risks Project.[33]

See also

Select bibliography

Books

  • Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2
  • Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
  • Global Catastrophic Risks, edited by Nick Bostrom, ISBN 978-0-19-857050-9
  • Human Enhancement, edited by Julian Savulescu and Nick Bostrom, ISBN 0-19-929972-2

Journal articles

References

  1. ^ "nickbostrom.com". Nickbostrom.com. Retrieved 16 October 2014.
  2. ^ "Professor Nick Bostrom : People". Oxford Martin School. Retrieved 16 October 2014.
  3. ^ "Future of Humanity Institute – University of Oxford". Fhi.ox.ac.uk. Retrieved 16 October 2014.
  4. ^ "The viability of Transcendence: the science behind the film". OUPblog. Retrieved 16 October 2014.
  5. ^ "New York Times". New York Times. Retrieved 19 February 2015.
  6. ^ a b "Oxford University Press". Oxford University Press. Retrieved 4 March 2015.
  7. ^ Frankel, Rebecca. "The FP Top 100 Global Thinkers". Foreign Policy. Retrieved 5 September 2015.
  8. ^ "Forbes". Forbes. Retrieved 19 February 2015.
  9. ^ "The Fiscal Times". The Fiscal Times. Retrieved 19 February 2015.
  10. ^ "The New York Times Blog". The New York Times. Retrieved 4 March 2015.
  11. ^ "Nick Bostrom home page". Retrieved 22 July 2014.
  12. ^ "Nick Bostrom : CV" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  13. ^ Bostrom, Nick (March 2002). "Existential Risks". Journal of Evolution and Technology. 9.
  14. ^ a b Andersen, Ross. "Omens". Aeon. Aeon Media Ltd. Retrieved 5 September 2015.
  15. ^ Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: is a doomsday catastrophe likely?" (PDF). Nature. 438 (7069): 754. doi:10.1038/438754a. PMID 16341005.
  16. ^ Bostrom, Nick (May–June 2008). "Where are they? Why I Hope the Search for Extraterrestrial Life Finds Nothing" (PDF). MIT Technology Review: 72–77.
  17. ^ "Existential Risk Prevention as Global Priority" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  18. ^ Parfit, Derek (1984). Reasons and Persons. Oxford, England: Oxford University Press. pp. 453–454. ISBN 019824908X.
  19. ^ "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Nickbostrom.com. Retrieved 16 October 2014.
  20. ^ "Existential Risks: Analyzing Human Extinction Scenarios". Nickbostrom.com. Retrieved 16 October 2014.
  21. ^ "The Future of Life Institute Open Letter". The Future of Life Institute. Retrieved 4 March 2015.
  22. ^ "Scientists and investors warn on AI". The Financial Times. Retrieved 4 March 2015.
  23. ^ Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy (PDF). New York: Routledge. pp. 44–58. ISBN 0-415-93858-9. Retrieved 22 July 2014.
  24. ^ "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  25. ^ a b Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian.
  26. ^ Bostrom, Nick (2003). "Human Genetic Enhancements: A Transhumanist Perspective" (PDF). Journal of Value Inquiry. 37 (4): 493–506. doi:10.1023/B:INQU.0000019037.67783.d5.
  27. ^ Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics. 19 (3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
  28. ^ Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656–679. doi:10.1086/505233.
  29. ^ [1] Archived 2009-12-20 at the Wayback Machine
  30. ^ [2] Archived 2009-11-01 at the Wayback Machine
  31. ^ "73. Nick Bostrom". The FP Top 100 Global Thinkers. Foreign Policy. December 2009.
  32. ^ Bostrom, Nick (19 January 2010). "Are You Living in a Computer Simulation?".
  33. ^ "nickbostrom.com". Nickbostrom.com. Retrieved 19 February 2015.