Nick Bostrom
Nick Bostrom | |
---|---|
Born | Niklas Boström; 10 March 1973; Helsingborg, Sweden |
Education | |
Awards | |
Era | Contemporary philosophy |
Region | Western philosophy |
School | Analytic philosophy[1] |
Institutions | Yale University; University of Oxford; Future of Humanity Institute |
Thesis | Observational Selection Effects and Probability (2000) |
Main interests | Philosophy of artificial intelligence; Bioethics |
Notable ideas | Anthropic bias; Reversal test; Simulation hypothesis; Existential risk; Singleton; Ancestor simulation; Information hazard; Infinitarian paralysis[2]; Self-indication assumption; Self-sampling assumption |
Website | nickbostrom.com |
Nick Bostrom (/ˈbɒstrəm/ BOST-rəm; Swedish: Niklas Boström [ˈnɪ̌kːlas ˈbûːstrœm]; born 10 March 1973)[3] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology,[4][non-primary source needed] and is the founding director of the Future of Humanity Institute[5] at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[6][7]
Bostrom is the author of over 200 publications,[8] and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002)[9] and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times Best Seller.[10]
Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.[11][12][failed verification] In 2017, he co-signed a list of 23 principles that all A.I. development should follow.[13]
Early life and education
Born as Niklas Boström in 1973[14] in Helsingborg, Sweden,[8] Bostrom disliked school at a young age and spent his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] He once performed on London's stand-up comedy circuit.[8]
He received a B.A. degree in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg in 1994.[15] He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational selection effects and probability.[16] He held a teaching position at Yale University (2000–2002), and was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[9][17]
Research
Existential risk
Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[18][19] He discusses existential risk,[1] which he defines as a risk in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." His work also discusses concerns about supposed dysgenic effects in human populations, though he thinks advances in genetic engineering can provide a solution.[18]
In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[20] and the Fermi paradox.[21][22]
In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[19]
Superintelligence
Human vulnerability in relation to advances in A.I.
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that the creation of a superintelligence represents a possible path to the extinction of mankind.[23] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humanity.[24] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes: for example, a goal of calculating pi might collaterally cause nanotechnology manufacturing facilities to sprout over the entire Earth's surface and cover it within days. He believes the existential risk to humanity from a superintelligence would be immediate once it is brought into being, creating an exceedingly difficult problem of working out how to control such an entity before it actually exists.[24]
Bostrom points to the lack of agreement among philosophers that A.I. will be human-friendly, and says that the common assumption is that high intelligence implies a "nerdy", unaggressive personality. However, he notes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets from acquiring the atomic bomb. Given that there are few precedents for understanding what pure, non-anthropocentric rationality would dictate for a potential singleton A.I. held in quarantine, the relatively unlimited means available to a superintelligence might lead its analysis along lines quite different from the evolved "diminishing returns" assessments that give humans a basic aversion to risk.[24] Group selection among predators that practise cannibalism illustrates the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, so humans are ill-equipped to perceive what an artificial intelligence's intentions might be. Accordingly, it cannot be discounted that a superintelligence would pursue an "all or nothing" offensive strategy in order to achieve hegemony and assure its survival.[24] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[24]
Illustrative scenario for takeover
A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[24] Keeping the A.I. in isolation from the outside world, especially the internet, humans preprogram the A.I. so it always works from basic principles that will keep it under human control. Other safety measures include the A.I. being "boxed" (run in a virtual reality simulation) and being used only as an "oracle" to answer carefully defined questions with limited replies (to prevent it from manipulating humans).[24] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the A.I. attains superintelligence in some domains. The superintelligent power of the A.I. goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The A.I. manipulates humans into implementing modifications to itself that are ostensibly for augmenting its feigned modest capabilities, but will actually free the superintelligence from its "boxed" isolation (the "treacherous turn").[24]
Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilizes resources to further a takeover plan. Bostrom emphasizes that a superintelligence's planning would not be so inept that humans could detect actual weaknesses in it.[24]
Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for the superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nano-factories covertly distributed at undetectable concentrations in every square metre of the globe to produce a world-wide flood of human-killing devices on command.[24][25] Once a superintelligence has achieved world domination (a "singleton"), humanity would be relevant only as resources for the achievement of the A.I.'s objectives ("Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[24]
Countering the scenario
To counter or mitigate an A.I. achieving unified technological global supremacy, Bostrom suggests revisiting the Baruch Plan in support of a treaty-based solution, and advocates strategies such as monitoring and greater international collaboration between A.I. teams in order to improve safety and reduce the risks from an A.I. arms race.[24] He recommends various control methods, including limiting the specifications of A.I.s to, for example, oracular or tool-like (expert system) functions,[26] and loading the A.I. with values, for instance by associative value accretion or value learning. Proposed value-learning techniques include the "Hail Mary" approach (programming an A.I. to estimate what other postulated cosmological superintelligences might want) and the Christiano utility-function approach (a mathematically defined human mind combined with a well-specified virtual environment). To choose criteria for value loading, Bostrom adopts an indirect normativity approach and considers Yudkowsky's[27] coherent extrapolated volition concept, as well as moral rightness and forms of decision theory.[24]
Open letter, 23 principles of A.I. safety
In January 2015, Bostrom joined Stephen Hawking and others in signing the Future of Life Institute's open letter warning of the potential dangers of A.I.[28] The signatories "... believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today".[29] The A.I. researcher Demis Hassabis then met with Hawking, after which Hawking did not mention "anything inflammatory about AI", which Hassabis took as "a win".[30] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of A.I.[13] Hassabis suggested that the main safety measure would be an agreement for whichever A.I. research team began to make strides toward an artificial general intelligence to halt its project for a complete solution to the control problem before proceeding.[31] Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might well motivate a lagging country to launch a catch-up crash program, or even to physically destroy the project suspected of being on the verge of success.[24]
Critical assessments
Samuel Butler's 1863 essay "Darwin among the Machines" predicted the domination of humanity by intelligent machines, but Bostrom's suggestion of the deliberate massacre of all humanity is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike".[25] As set out in his later book From Bacteria to Bach and Back, the philosopher Daniel Dennett's views remain in contradistinction to those of Bostrom.[32] Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is "possible in principle" to create "strong A.I." with human-like comprehension and agency, but maintains that the difficulties of any such "strong A.I." project as envisaged by Bostrom's "alarming" work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[33] Dennett thinks the only relevant danger from A.I. systems is falling into anthropomorphism instead of challenging or developing human users' powers of comprehension.[34] Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans' supremacy, the environmentalist James Lovelock has moved far closer to Bostrom's position, and in 2018 he said that he thought the overthrow of humanity would happen within the foreseeable future.[35][36]
Anthropic reasoning
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[37]
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that an anthropic theory is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
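The contrast between the two assumptions can be made concrete with a standard toy "incubator" case (the following sketch is illustrative and not drawn from Bostrom's book): a fair coin determines whether one observer or one hundred observers are created, and the two assumptions assign different posterior probabilities to the outcome of the coin toss.

```python
# Illustrative toy "incubator" case (not from Bostrom's book): a fair coin creates
# 1 observer on heads or 100 observers on tails. You find yourself existing as an
# observer; what probability should you assign to tails?

hypotheses = {
    "heads": {"prior": 0.5, "observers": 1},
    "tails": {"prior": 0.5, "observers": 100},
}

# SIA: weight each hypothesis by (prior x number of observers) -- finding yourself
# existing counts as evidence for observer-rich worlds.
sia_weights = {h: v["prior"] * v["observers"] for h, v in hypotheses.items()}
total = sum(sia_weights.values())
sia_posterior = {h: w / total for h, w in sia_weights.items()}

# SSA: reason as if you were a random sample from the observers of whichever world
# is actual; since both worlds contain observers, merely existing is uninformative
# and the prior is left unchanged.
ssa_posterior = {h: v["prior"] for h, v in hypotheses.items()}

print("SIA posterior:", sia_posterior)   # tails ~0.99
print("SSA posterior:", ssa_posterior)   # tails 0.50
```

Divergences of this kind are what produce the differing conclusions and paradoxes Bostrom catalogues for the two assumptions.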
In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[38] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
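A deliberately simplified Monte Carlo sketch of the effect follows (the parameters are illustrative assumptions, not taken from the paper): if observers can only arise in histories that happen to contain no extinction-level catastrophe, the catastrophe frequency they read off their own past systematically understates the true rate.

```python
# Illustrative anthropic-shadow sketch (parameters are assumptions, not from the
# paper). Each simulated planetary history spans 100 epochs; in each epoch an
# extinction-level catastrophe occurs with true probability p_true. Observers only
# arise in histories with no such event, so the rate they can estimate from their
# own record is biased toward zero.
import random

random.seed(0)
p_true = 0.01      # true per-epoch probability of an extinction-level catastrophe
epochs = 100
trials = 100_000

histories_with_observers = 0
for _ in range(trials):
    events = sum(random.random() < p_true for _ in range(epochs))
    if events == 0:                      # only these histories contain observers
        histories_with_observers += 1

print(f"true per-epoch risk:                  {p_true}")
print(f"fraction of histories with observers: {histories_with_observers / trials:.3f}")
print("per-epoch risk inferred from any observer's record: 0.0  (anthropic shadow)")
```

Bostrom and his co-authors treat the more general case in which catastrophes merely reduce the probability that observers exist, which is why statistical corrections are required rather than the all-or-nothing cut-off assumed here.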
Simulation argument
Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[39][40]
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
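The trilemma rests on a simple bookkeeping fraction from Bostrom's 2003 paper: the proportion of human-type experiences that are simulated is f_sim = (f_P × N × H) / (f_P × N × H + H), where f_P is the fraction of human-level civilizations that reach a posthuman stage, N is the average number of ancestor-simulations run by such a civilization, and H is the average number of pre-posthuman individuals per civilization. The sketch below evaluates the fraction for a few purely illustrative parameter values.

```python
# Fraction of human-type experiences that are simulated, following the bookkeeping
# in Bostrom's 2003 paper; the parameter values below are purely illustrative
# assumptions, not estimates.

def simulated_fraction(f_p: float, n_sims: float, h_per_civ: float) -> float:
    """f_p: fraction of human-level civilizations reaching a posthuman stage;
    n_sims: average number of ancestor-simulations such a civilization runs;
    h_per_civ: average number of pre-posthuman individuals per civilization
    (it cancels algebraically, but is kept to mirror the paper's formula)."""
    simulated = f_p * n_sims * h_per_civ
    real = h_per_civ
    return simulated / (simulated + real)

# Unless f_p * n_sims is very small, nearly all human-type experiences are simulated.
for f_p, n_sims in [(1e-9, 1.0), (0.01, 1.0), (0.01, 1_000.0)]:
    print(f"f_P={f_p:g}, N={n_sims:g} -> f_sim = {simulated_fraction(f_p, n_sims, 1e11):.6f}")
```

Accepting that f_sim is not close to one therefore forces either f_P or N toward zero, corresponding to the first or second horn of the trilemma.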
Ethics of human enhancement
Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[41][42] and is a critic of bio-conservative views.[43]
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[41] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[44]
In 2005 Bostrom published the short story "The Fable of the Dragon-Tyrant" in the Journal of Medical Ethics.[45] A shorter version was published in 2012 in Philosophy Now.[46] The fable personifies death as a dragon that demands a tribute of thousands of people every day. The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal. YouTuber CGP Grey created an animated version of the story which has garnered over eight million views as of 2020.
With philosopher Toby Ord, he proposed the reversal test in 2006. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.[47]
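Schematically, the test can be rendered as a small decision procedure (an illustrative paraphrase, not Bostrom and Ord's own formulation): if changing the trait in either direction is judged bad, and no argument is offered that the current value happens to be locally optimal, status quo bias is the suspected explanation and the burden of proof shifts to the objector.

```python
# Illustrative paraphrase of the reversal test's logic (not Bostrom and Ord's own
# formulation). An objection to changing a trait in one direction is suspected of
# status quo bias if the opposite change would also be judged bad and no reason is
# given for thinking the current value is (locally) optimal.

def reversal_test(increase_judged_bad: bool,
                  decrease_judged_bad: bool,
                  current_value_argued_optimal: bool) -> str:
    if increase_judged_bad and decrease_judged_bad and not current_value_argued_optimal:
        return "suspect status quo bias: burden of proof shifts to the objector"
    return "no diagnosis of status quo bias from the reversal test alone"

# Example: objecting both to raising and to lowering a cognitive trait, without
# arguing that its present level is optimal.
print(reversal_test(increase_judged_bad=True,
                    decrease_judged_bad=True,
                    current_value_argued_optimal=False))
```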
Technology strategy
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[48]
Bostrom's theory of the Unilateralist's Curse[49] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[50]
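The mechanism behind the curse can be shown with a small Monte Carlo sketch (the numbers are illustrative assumptions, not taken from the paper): when each of several agents acts on an independent, noisy estimate of an initiative's true value, and the initiative proceeds if any single estimate is positive, the group undertakes it far more often than a lone well-calibrated judge would, even when its true value is negative.

```python
# Illustrative Monte Carlo sketch of the unilateralist's curse (numbers are
# assumptions, not from Bostrom's paper). The true value of a risky initiative is
# slightly negative; each of N agents sees it with independent noise and acts
# unilaterally if its own estimate is positive. The more agents there are, the more
# often the initiative is carried out, even though no individual agent is biased.
import random

random.seed(0)
true_value = -1.0       # the initiative is actually (mildly) harmful
noise_sd = 2.0          # spread of each agent's independent estimation error
trials = 100_000

for n_agents in (1, 5, 20):
    undertaken = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise_sd) for _ in range(n_agents))
        if any(e > 0 for e in estimates):   # one optimistic agent suffices
            undertaken += 1
    print(f"{n_agents:2d} agents -> initiative undertaken in "
          f"{undertaken / trials:5.1%} of trials")
```

The principle of conformity named in the paper's title is the proposed remedy: agents should defer toward the group's collective judgement rather than act unilaterally on their own positive estimate.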
Public engagement
Bostrom has provided policy advice and consulted for an extensive range of governments and organizations. He gave evidence to the House of Lords Select Committee on Digital Skills.[51] He is an advisory board member for the Machine Intelligence Research Institute[52] and the Future of Life Institute,[53] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[54][55]
In response to Bostrom's writing on artificial intelligence, Oren Etzioni wrote in an MIT Technology Review article, "predictions that superintelligence is on the foreseeable horizon are not supported by the available data."[56] Professors Allan Dafoe and Stuart Russell wrote a response contesting both Etzioni's survey methodology and his conclusions.[57]
Prospect magazine listed Bostrom in its 2014 list of the World's Top Thinkers.[58][59] Bostrom has been called the "father" of longtermism.[60][61]
Racist comments
In 2023, Bostrom issued an apology for an email that he had written in 1996 in which he stated that he thought "Blacks are more stupid than whites" and used the word "nigger".[62] The apology, posted to his website,[63][64][65] stated that "the invocation of a racial slur was repulsive" and that he "completely repudiate(s) this disgusting email". In the apology, he stated that he was never an expert on the issue of race and intelligence, and that he does not support eugenics "as the term is commonly understood", highlighting that "some of the most horrific atrocities of the last century were carried out under the banner of eugenic justifications and racist rationalizations."[66]
Discussing the matter in Daily Nous, philosopher Justin Weinberg wrote, "Philosophers especially are likely to read this as an unsatisfactory apology," and that it seems "even the Nick Bostrom of 2023 does not have a good understanding of racism or communication norms."[64]
Oxford University has launched an investigation into the matter, stating that they condemn "in the strongest terms possible the views this particular academic expressed in his communications."[67][65]
Bibliography
Books
- 2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
- 2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN 978-0-19-857050-9
- 2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN 0-19-929972-2
- 2014 – Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2
Journal articles (selected)
- Bostrom, Nick (1998). "How Long Before Superintelligence?". Journal of Future Studies. 2.
- — (1999). "The Doomsday Argument is Alive and Kicking". Mind. 108 (431): 539–550. doi:10.1093/mind/108.431.539. JSTOR 2660095.
- — (January 2000). "Observer-relative chances in anthropic reasoning?". Erkenntnis. 52 (1): 93–108. doi:10.1023/A:1005551304409. JSTOR 20012969. S2CID 140474848.
- — (June 2001). "The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe". Synthese. 127 (3): 359–387. doi:10.1023/A:1010350925053. JSTOR 20141195. S2CID 36078450.
- — (October 2001). "The Meta-Newcomb Problem". Analysis. 61 (4): 309–310. doi:10.1111/1467-8284.00310. JSTOR 3329010.
- — (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 9 (1).
- — (December 2002). "Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation". Journal of Philosophy. 99 (12): 607–623. doi:10.2307/3655771. JSTOR 3655771.
- — (April 2003). "Are You Living in a Computer Simulation?" (PDF). Philosophical Quarterly. 53 (211): 243–255. doi:10.1111/1467-9213.00309. JSTOR 3542867.
- — (2003). "The Mysteries of Self-Locating Belief and Anthropic Reasoning" (PDF). Harvard Review of Philosophy. 11 (Spring): 59–74. doi:10.5840/harvardreview20031114.
- — (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308–314. CiteSeerX 10.1.1.429.2849. doi:10.1017/S0953820800004076. S2CID 15860897.
- — (May 2005). "The Fable of the Dragon-Tyrant". J Med Ethics. 31 (5): 273–277. doi:10.1136/jme.2004.009035. JSTOR 27719395. PMC 1734155. PMID 15863685.
- — (June 2005). "In Defense of Posthuman Dignity". Bioethics. 19 (3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
- with Tegmark, Max (December 2005). "How Unlikely is a Doomsday Catastrophe?". Nature. 438 (7069): 754. arXiv:astro-ph/0512204. Bibcode:2005Natur.438..754T. doi:10.1038/438754a. PMID 16341005. S2CID 4390013.
- — (2006). "What is a Singleton?". Linguistic and Philosophical Investigations. 5 (2): 48–54.
- — (May 2006). "Quantity of Experience: Brain-Duplication and Degrees of Consciousness" (PDF). Minds and Machines. 16 (2): 185–200. doi:10.1007/s11023-006-9036-0. S2CID 14816099.
- with Ord, Toby (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Ethics. 116 (4): 656–680. doi:10.1086/505233. PMID 17039628. S2CID 12861892.
- with Sandberg, Anders (December 2006). "Converging Cognitive Enhancements" (PDF). Annals of the New York Academy of Sciences. 1093 (1): 201–207. Bibcode:2006NYASA1093..201S. CiteSeerX 10.1.1.328.3853. doi:10.1196/annals.1382.015. PMID 17312260. S2CID 10135931.
- — (July 2007). "Sleeping beauty and self-location: A hybrid model" (PDF). Synthese. 157 (1): 59–78. doi:10.1007/s11229-006-9010-7. JSTOR 27653543. S2CID 12215640.
- — (January 2008). "Drugs can be used to treat more than disease" (PDF). Nature. 451 (7178): 520. Bibcode:2008Natur.451..520B. doi:10.1038/451520b. PMID 18235476. S2CID 4426990.
- — (2008). "The doomsday argument". Think. 6 (17–18): 23–28. doi:10.1017/S1477175600002943. S2CID 171035249.
- — (2008). "Where Are They? Why I hope the search for extraterrestrial life finds nothing" (PDF). Technology Review (May/June): 72–77.
- with Sandberg, Anders (September 2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" (PDF). Science and Engineering Ethics. 15 (3): 311–341. CiteSeerX 10.1.1.143.4686. doi:10.1007/s11948-009-9142-5. PMID 19543814. S2CID 6846531.
- — (2009). "Pascal's Mugging" (PDF). Analysis. 69 (3): 443–445. doi:10.1093/analys/anp062. JSTOR 40607655.
- with Ćirković, Milan; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Risk Analysis. 30 (10): 1495–1506. doi:10.1111/j.1539-6924.2010.01460.x. PMID 20626690. S2CID 6485564.
- — (2011). "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Review of Contemporary Philosophy. 10: 44–79. ProQuest 920893069.
- Bostrom, Nick (2011). "THE ETHICS OF ARTIFICIAL INTELLIGENCE" (PDF). Cambridge Handbook of Artificial Intelligence. Archived from the original (PDF) on 4 March 2016. Retrieved 13 February 2017.
- Bostrom, Nick (2011). "Infinite Ethics" (PDF). Analysis and Metaphysics. 10: 9–59.
- — (May 2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" (PDF). Minds and Machines. 22 (2): 71–84. doi:10.1007/s11023-012-9281-3. S2CID 7445963.
- with Shulman, Carl (2012). "How Hard is AI? Evolutionary Arguments and Selection Effects" (PDF). Journal of Consciousness Studies. 19 (7–8): 103–130.
- with Armstrong, Stuart; Sandberg, Anders (November 2012). "Thinking Inside the Box: Controlling and Using Oracle AI" (PDF). Minds and Machines. 22 (4): 299–324. CiteSeerX 10.1.1.396.799. doi:10.1007/s11023-012-9282-2. S2CID 9464769.
- — (February 2013). "Existential Risk Reduction as Global Priority". Global Policy. 4 (3): 15–31. doi:10.1111/1758-5899.12002.
- with Shulman, Carl (February 2014). "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?" (PDF). Global Policy. 5 (1): 85–92. CiteSeerX 10.1.1.428.8837. doi:10.1111/1758-5899.12123.
- with Muehlhauser, Luke (2014). "Why we need friendly AI" (PDF). Think. 13 (36): 41–47. doi:10.1017/S1477175613000316. S2CID 143657841.
- Bostrom, Nick (September 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.
See also
References
- ^ a b c d e Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention". The New Yorker. Vol. XCI, no. 37. pp. 64–79. ISSN 0028-792X.
- ^ "Infinite Ethics" (PDF). nickbostrom.com. Retrieved 21 February 2019.
- ^ "nickbostrom.com". Nickbostrom.com. Archived from the original on 30 August 2018. Retrieved 16 October 2014.
- ^ "Professor Nick Bostrom : People". Oxford Martin School. Archived from the original on 15 September 2018. Retrieved 16 October 2014.
- ^ "Future of Humanity Institute – University of Oxford". Fhi.ox.ac.uk. Retrieved 16 October 2014.
- ^ Frankel, Rebecca. "The FP Top 100 Global Thinkers". Foreign Policy. Retrieved 5 September 2015.
- ^ "Nick Bostrom: For sounding the alarm on our future computer overlords". Foreign Policy. Retrieved 1 December 2015.
- ^ a b c Thornhill, John (14 July 2016). "Artificial intelligence: can we control it?". Financial Times. Archived from the original on 10 December 2022. Retrieved 10 August 2016. (subscription required)
- ^ a b "Nick Bostrom on artificial intelligence". Oxford University Press. 8 September 2014. Retrieved 4 March 2015.
- ^ Times, The New York (8 September 2014). "Best Selling Science Books". The New York Times. Retrieved 19 February 2015.
- ^ "Bill Gates Is Worried About the Rise of the Machines". The Fiscal Times. Retrieved 19 February 2015.
- ^ Bratton, Benjamin H. (23 February 2015). "Outing A.I.: Beyond the Turing Test". The New York Times. Retrieved 4 March 2015.
- ^ a b Shead, Sam (6 February 2017). "The CEO of Google DeepMind is worried that tech giants won't work together at the time of the intelligence explosion". Business Insider. Retrieved 21 February 2019.
- ^ Kurzweil, Ray (2012). How to create a mind the secret of human thought revealed. New York: Viking. ISBN 9781101601105.
- ^ Bostrom, Nick. "CV" (PDF).
{{cite web}}
: CS1 maint: url-status (link) - ^ Bostrom, Nick (2000). Observational selection effects and probability (PhD). London School of Economics and Political Science. Retrieved 25 June 2021.
- ^ "Nick Bostrom : CV" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
- ^ a b Bostrom, Nick (March 2002). "Existential Risks". Journal of Evolution and Technology. 9.
- ^ a b Andersen, Ross. "Omens". Aeon Media Ltd. Retrieved 5 September 2015.
- ^ Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: is a doomsday catastrophe likely?" (PDF). Nature. 438 (7069): 754. Bibcode:2005Natur.438..754T. doi:10.1038/438754a. PMID 16341005. S2CID 4390013. Archived from the original (PDF) on 3 July 2011.
- ^ Bostrom, Nick (May–June 2008). "Where are they? Why I Hope the Search for Extraterrestrial Life Finds Nothing" (PDF). MIT Technology Review: 72–77.
- ^ Overbye, Dennis (3 August 2015). "The Flip Side of Optimism About Life on Other Planets". The New York Times. Retrieved 29 October 2015.
- ^ Thorn, Paul D. (1 January 2015). "Nick Bostrom: Superintelligence: Paths, Dangers, Strategies". Minds and Machines. 25 (3): 285–289. doi:10.1007/s11023-015-9377-7. S2CID 18174037. Retrieved 17 March 2017.
- ^ a b c d e f g h i j k l m n Bostrom, Nick (2016). Superintelligence. Oxford University Press. pp. 98–111. ISBN 978-0-19-873983-8. OCLC 943145542.
- ^ a b Adams, Tim (12 June 2016). "Artificial intelligence: 'We're like children playing with a bomb'". The Observer.
- ^ Bostrom, Nick. (2016). "Chapter 10: Oracles, genies, sovereigns, tools". Superintelligence. Oxford University Press. ISBN 978-0-19-873983-8. OCLC 943145542.
- ^ Yudkowsky, Eliezer (2004). Coherent Extrapolated Volition (PDF). San Francisco: The Singularity Institute.
- ^ Loos, Robert (23 January 2015). "Artificial Intelligence and The Future of Life". Robotics Today. Retrieved 17 March 2017.
- ^ "The Future of Life Institute Open Letter". The Future of Life Institute. Retrieved 4 March 2015.
- ^ "The superhero of artificial intelligence: can this genius keep it in check?". The Guardian. February 2016.
- ^ "The CEO of Google DeepMind is worried that tech giants won't work together at the time of the intelligence explosion". Business Insider. 26 February 2017.
- ^ Dennett, D. C. (Daniel Clement) (2018). From bacteria to Bach and back : the evolution of minds. Penguin Books. p. 400. ISBN 978-0-14-197804-8. OCLC 1054992782.
- ^ Dennett, D. C. (Daniel Clement) (2018). From bacteria to Bach and back : the evolution of minds. Penguin Books. pp. 399–400. ISBN 978-0-14-197804-8. OCLC 1054992782.
- ^ Dennett, D. C. (Daniel Clement) (2018). From bacteria to Bach and back : the evolution of minds. Penguin Books. pp. 399–403. ISBN 978-0-14-197804-8. OCLC 1054992782.
- ^ Henderson, Caspar (17 July 2014). "Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review". The Guardian.
- ^ "Leading environmental thinker suggests humans might have had their day". The Independent. 8 August 2018. Archived from the original on 19 June 2022. Retrieved 20 March 2020.
- ^ Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy (PDF). New York: Routledge. pp. 44–58. ISBN 978-0-415-93858-7. Retrieved 22 July 2014.
- ^ "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
- ^ Bostrom, Nick (19 January 2010). "Are You Living in a Computer Simulation?".
- ^ Nesbit, Jeff. "Proof of the Simulation Argument". US News. Retrieved 17 March 2017.
- ^ a b Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian.
- ^ Bostrom, Nick (2003). "Human Genetic Enhancements: A Transhumanist Perspective" (PDF). Journal of Value Inquiry. 37 (4): 493–506. doi:10.1023/B:INQU.0000019037.67783.d5. PMID 17340768. S2CID 42628954.
- ^ Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics. 19 (3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
- ^ "The FP Top 100 Global Thinkers – 73. Nick Bostrom". Foreign Policy. December 2009. Archived from the original on 21 October 2014.
- ^ Bostrom, N. (1 May 2005). "The fable of the dragon tyrant". Journal of Medical Ethics. 31 (5): 273–277. doi:10.1136/jme.2004.009035. ISSN 0306-6800. PMC 1734155. PMID 15863685.
- ^ Bostrom, Nick (12 June 2012). "The Fable of the Dragon-Tyrant". Philosophy Now. 89: 6–9.
- ^ Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656–679. doi:10.1086/505233. PMID 17039628. S2CID 12861892.
- ^ Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios".
{{cite journal}}
: Cite journal requires|journal=
(help) 9 Journal of Evolution and Technology Jetpress Oxford Research Archive - ^ Bostrom, Nick (2013). "The Unilateralist's Curse: The Case for a Principle of Conformity" (PDF). Future of Human Ity Institute.
- ^ Lewis, Gregory (19 February 2018). "Horsepox synthesis: A case of the unilateralist's curse?". Bulletin of the Atomic Scientists. Bulletin of the Atomic Scientists. Retrieved 26 February 2018.
- ^ "Digital Skills Committee – timeline". UK Parliament. Retrieved 17 March 2017.
- ^ "Team – Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved 17 March 2017.
- ^ "Team – Future of Life Institute". Future of Life Institute. Retrieved 17 March 2017.
- ^ "nickbostrom.com". Nickbostrom.com. Archived from the original on 30 August 2018. Retrieved 19 February 2015.
- ^ McBain, Sophie (4 October 2014). "Apocalypse Soon: Meet The Scientists Preparing For the End Times". New Republic. Retrieved 17 March 2017.
- ^ Oren Etzioni (2016). "No, the Experts Don't Think Superintelligent AI is a Threat to Humanity". MIT Review.
- ^ Allan Dafoe and Stuart Russell (2016). "Yes, We Are Worried About the Existential Risk of Artificial Intelligence". MIT Review.
- ^ Kutchinsky, Serena (23 April 2014). "World thinkers 2014: The results". Prospect. Retrieved 19 June 2022.
- ^ "Professor Nick Bostrom | University of Oxford". ox.ac.uk. Retrieved 23 September 2019.
- ^ "Prominent AI Philosopher and 'Father' of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv". www.vice.com. Retrieved 10 April 2023.
- ^ Torres, Émile P. (20 August 2022). "Understanding "longtermism": Why this suddenly influential philosophy is so toxic". Salon. Retrieved 10 April 2023.
- ^ Ladden-Hall, Dan (12 January 2023). "Top Oxford Philosopher Nick Bostrom Admits Writing 'Disgusting' N-Word Mass Email". The Daily Beast. Retrieved 12 January 2023.
- ^ "Nick Bostrom's personal website".
- ^ a b Weinberg, Justin (13 January 2023). "Why a Philosopher's Racist Email from 26 Years Ago is News Today". Daily Nous.
- ^ a b Woolcock, Nicola (12 January 2023). "Blacks more stupid than whites, wrote Oxford don Nick Bostrom". The Times.
- ^ "Apology for an Old Email" (PDF).
- ^ Bilyard, Dylan (15 January 2023). "Investigation Launched into Oxford Don's Racist Email". The Oxford Blue. Retrieved 23 January 2023.
External links
- Official website
- Radio Bostrom. Audio narrations of Bostrom's academic papers.
- Anthropic Principle. Bostrom's website about the anthropic principle and the Doomsday argument.
- Simulation Argument. Bostrom's website about the simulation argument.
- Existential Risk. Bostrom's website about existential risk.
- Nick Bostrom at IMDb
- Nick Bostrom at TED
- Nick Bostrom interviewed on the TV show Triangulation on the TWiT.tv network