Global catastrophic risk

From Wikipedia, the free encyclopedia

Artist's impression of a major asteroid impact. An asteroid may have caused the extinction of the non-avian dinosaurs.[1]

A global catastrophic risk is a hypothetical future event that could damage human well-being on a global scale,[2] even endangering or destroying modern civilization.[3] An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an existential risk.[4]

Over the last two decades, a number of academic and non-profit organizations have been established to research global catastrophic and existential risks and formulate potential mitigation measures.[5][6][7][8]

Definition and classification

Scope/intensity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"[9]

Defining global catastrophic risks

The term global catastrophic risk "lacks a sharp definition", and generally refers (loosely) to a risk that could inflict "serious damage to human well-being on a global scale".[10]

Humanity has suffered large catastrophes before. Some of these have caused serious damage, but were only local in scope—e.g. the Black Death may have resulted in the deaths of a third of Europe's population,[11] 10% of the global population at the time.[12] Some were global, but were not as severe—e.g. the 1918 influenza pandemic killed an estimated 3–6% of the world's population.[13] Most global catastrophic risks would not be so intense as to kill the majority of life on Earth, but even if one did, the ecosystem and humanity would eventually recover (in contrast to existential risks).

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional", scale. Posner highlights such events as worthy of special attention on cost–benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole.[14]

Defining existential risks

Existential risks are defined as "risks that threaten the destruction of humanity's long-term potential."[15] The instantiation of an existential risk (an existential catastrophe[16]) would either cause outright human extinction or irreversibly lock in a drastically inferior state of affairs.[9][17] Existential risks are a sub-class of global catastrophic risks, where the damage is not only global, but also terminal and permanent (preventing recovery and thus impacting both the current and all subsequent generations).[9]

Non-extinction risks

While extinction is the most obvious way in which humanity's long-term potential could be destroyed, there are others, including unrecoverable collapse and unrecoverable dystopia.[18] A disaster severe enough to cause the permanent, irreversible collapse of human civilisation would constitute an existential catastrophe, even if it fell short of extinction.[18] Similarly, if humanity fell under a totalitarian regime with no chance of recovery, such a dystopia would also be an existential catastrophe.[19] Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".[19] (George Orwell's novel Nineteen Eighty-Four suggests[20] an example.[21]) A dystopian scenario shares the key features of extinction and unrecoverable collapse of civilisation—before the catastrophe, humanity faced a vast range of bright futures to choose from; after the catastrophe, humanity is locked forever in a terrible state.[18]

Potential sources of risk

Potential global catastrophic risks include anthropogenic risks, caused by humans (technology, governance, climate change), and non-anthropogenic or natural risks.[3] Technological risks include the creation of destructive artificial intelligence, biotechnology, or nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war (including nuclear holocaust), bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructure such as the electrical grid, or the failure to manage a natural pandemic. Problems and risks in the domain of Earth-system governance include global warming, environmental degradation (including extinction of species), famine as a result of non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture.

Examples of non-anthropogenic risks are an asteroid impact event, a supervolcanic eruption, a lethal gamma-ray burst, a geomagnetic storm destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, or the Sun predictably transforming into a red giant star and engulfing the Earth.

Likelihood

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks.[18][22][23][24][25] A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk.[24] Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high, then it would be highly unlikely that humanity would have survived as long as it has. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 (and likely "less than one in 87,000") per year.[24]
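
The logic of this bound can be illustrated with a simplified calculation. The sketch below is an illustration only, not the cited study's full statistical treatment, and the 10% likelihood cut-off is an assumption chosen here; it asks how large a constant annual extinction probability could be before a 200,000-year survival record becomes implausible.

```python
# Illustrative simplification: if the annual probability of natural extinction
# were p, the chance of surviving T consecutive years would be (1 - p)**T.
# Values of p that make that survival record too unlikely can be ruled out.

def annual_risk_bound(survival_years: int, likelihood_threshold: float) -> float:
    """Largest annual risk p with (1 - p)**survival_years >= likelihood_threshold."""
    return 1 - likelihood_threshold ** (1 / survival_years)

# ~200,000 years of Homo sapiens survival and a 10% likelihood cut-off
# (an assumption for illustration) give a bound near 1 in 87,000 per year.
p_max = annual_risk_bound(200_000, 0.10)
print(f"annual natural risk below roughly 1 in {round(1 / p_max):,}")
```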

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record.[18] For example, the chance of a comet or asteroid impact large enough to cause an impact winter and human extinction before the year 2100 has been estimated at one in a million.[26][27] Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity.[28] Based on the geological record, supervolcanic eruptions are estimated to occur on average about every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction.[28] Famously, the supervolcano Mt. Toba may have almost wiped out humanity at the time of its last eruption (though this is contentious).[28][29]
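
Such long-run average rates lend themselves to simple order-of-magnitude calculations. The sketch below is an illustration under the simplifying assumption that eruptions arrive independently at the geological average rate, not a model from the cited sources; it converts the once-per-50,000-year rate into a probability for the coming century.

```python
import math

# Rough Poisson-style illustration of the supereruption rate quoted above.
rate_per_year = 1 / 50_000   # one supereruption per ~50,000 years on average
years = 100                  # roughly the coming century

p_at_least_one = 1 - math.exp(-rate_per_year * years)
print(f"P(at least one supereruption in {years} years) ~ {p_at_least_one:.2%}")  # ~0.20%
```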

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances.[24] Humanity has only survived 75 years since the creation of nuclear weapons, and for future technologies there is no track record at all. This has led thinkers like Carl Sagan to conclude that humanity is currently in a 'time of perils'[30]—a uniquely dangerous period in human history, in which it is subject to unprecedented levels of risk, beginning when humans first became able to pose risks to themselves through their own actions.[18][31]

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.[32] In 2008, an informal survey of experts at a conference hosted by the Future of Humanity Institute estimated a 19% risk of human extinction by the year 2100, though given the survey's limitations these results should be taken "with a grain of salt".[23]

Risk                                  Estimated probability for human extinction before 2100
Overall probability                   19%
Molecular nanotechnology weapons      5%
Superintelligent AI                   5%
All wars (including civil wars)       4%
Engineered pandemic                   2%
Nuclear war                           1%
Nanotechnology accident               0.5%
Natural pandemic                      0.05%
Nuclear terrorism                     0.03%

Table source: Future of Humanity Institute, 2008.[23]

There have been a number of other estimates of existential risk, extinction risk, or a global collapse of civilization:

  • In 1996, John Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).[33]
  • In 2002, Nick Bostrom gave the following estimate of existential risk over the long term: ‘My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher.’[34]
  • In 2003, Martin Rees estimated a 50% chance of collapse of civilisation in the twenty-first century.[35]
  • The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05%.[36]
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g. human extinction)".[37]
  • In 2020, Toby Ord estimated existential risk over the next century at ‘1 in 6’ in his book The Precipice: Existential Risk and the Future of Humanity.[18][38]
  • Metaculus users currently estimate a 3% probability of humanity going extinct before 2100.[39]

Methodological challenges

Research into the nature and mitigation of global catastrophic risks and existential risks is subject to a unique set of challenges and consequently is not easily held to the usual standards of scientific rigour.[18] For instance, it is neither feasible nor ethical to study these risks experimentally. Carl Sagan expressed this with regard to nuclear war: “Understanding the long-term consequences of nuclear war is not a problem amenable to experimental verification”.[40] Moreover, many catastrophic risks change rapidly as technology advances and background conditions (such as international relations) change. Another challenge is the general difficulty of accurately predicting the future over long timescales, especially for anthropogenic risks, which depend on complex human political, economic and social systems.[18] In addition to known and tangible risks, unforeseeable black swan extinction events may occur, presenting an additional methodological problem.[18][41]

Lack of historical precedent

Humanity has never suffered an existential catastrophe and if one were to occur, it would necessarily be unprecedented.[18] Therefore, existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects.[42] Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers, so regardless of their frequency, no civilization observes existential risks in its history.[42] These anthropic issues may partly be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or directly evaluating the likely impact of new technology.[9]

To understand the dynamics of an unprecedented, unrecoverable global civilizational collapse (a type of existential risk), it may be instructive to study the various local civilizational collapses that have occurred throughout human history.[43] For instance, civilizations such as the Roman Empire have ended in a loss of centralized governance and a major civilization-wide loss of infrastructure and advanced technology. However, these examples suggest that societies are fairly resilient to catastrophe; for example, Medieval Europe survived the Black Death without suffering anything resembling a civilizational collapse despite losing 25 to 50 percent of its population.[44]

Incentives and coordination

Economic factors help explain why so little effort goes into existential risk reduction. Existential risk reduction is a global public good, so we should expect it to be undersupplied by markets.[9] Even if a large nation invests in risk mitigation measures, that nation will enjoy only a small fraction of the benefit of doing so. Furthermore, existential risk reduction is an intergenerational global public good: most of its benefits would be enjoyed by future generations, and though those future people might in theory be willing to pay substantial sums for existential risk reduction, no mechanism for such a transaction exists.[9]

Cognitive biases

Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.[45]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people report being roughly as willing to pay to prevent the deaths of 2,000 birds as of 200,000 birds.[46] Similarly, people are often more concerned about threats to individuals than to larger groups.[45]

Moral importance of existential risk

In one of the earliest discussions of ethics of human extinction, Derek Parfit offers the following thought experiment:[47]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

— Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—what humanity could expect to achieve if it survived.[18] From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good life is for future people).[18]: 273 [48] On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years.[47] And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years.[17][18]: 21  The potential forgone if humanity went extinct would therefore be vast, so reducing existential risk by even a small amount would have very significant moral value.[9][49]
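
Schematically, the utilitarian framing above can be written as a simple product; the symbols below are introduced here for illustration and do not appear in the cited sources.

```latex
% V : expected value of humanity's future
% T : expected duration of survival (years)
% N : average number of people alive per year
% Q : average quality of life per person-year
V \;\approx\; T \times N \times Q
```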

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking.[50] While these views are controversial,[26][51][52] even they would agree that an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.[53]

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value.[18][40] So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided.[18] One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership ... between those who are living, those who are dead, and those who are to be born".[54] If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to 'pay it forward', and ensure that humanity's inheritance is passed down to future generations.[18]: 49–51 

Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[55] Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[56]

Proposed mitigation

Defense in depth is a useful framework for categorizing risk mitigation measures into three layers of defense:[57]

  1. Prevention: Reducing the probability of a catastrophe occurring in the first place. Example: Measures to prevent outbreaks of new highly infectious diseases.
  2. Response: Preventing the scaling of a catastrophe to the global level. Example: Measures to prevent escalation of a small-scale nuclear exchange into an all-out nuclear war.
  3. Resilience: Increasing humanity's resilience (against extinction) when faced with global catastrophes. Example: Measures to increase food security during a nuclear winter.

Human extinction is most likely when all three defenses are weak, that is, "by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against".[57]
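
Read multiplicatively, the framework implies that an extinction-level outcome requires a failure at every layer. The sketch below is a minimal illustration of that idea, assuming for simplicity that the layers fail independently and using hypothetical numbers rather than estimates from the cited paper.

```python
def extinction_probability(p_catastrophe_starts: float,
                           p_response_fails: float,
                           p_resilience_fails: float) -> float:
    """Product of per-layer failure probabilities (independence assumed for illustration)."""
    return p_catastrophe_starts * p_response_fails * p_resilience_fails

# Hypothetical illustrative inputs, not estimates from the literature:
print(extinction_probability(0.10, 0.20, 0.05))  # 0.001
```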

The unprecedented nature of existential risks poses a special challenge in designing risk mitigation measures since humanity will not be able to learn from a track record of previous events.[18]

Planetary management and respecting planetary boundaries have been proposed as approaches to preventing ecological catastrophes. Within the scope of these approaches, the field of geoengineering encompasses the deliberate large-scale engineering and manipulation of the planetary environment to combat or counteract anthropogenic changes in atmospheric chemistry. Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario.[58] Solutions of this scope may require megascale engineering. Global food storage has been proposed, but the monetary cost would be high; furthermore, it would likely worsen the millions of deaths per year already caused by malnutrition.[59]

Some survivalists stock survival retreats with multiple-year food supplies.

The Svalbard Global Seed Vault is buried 400 feet (120 m) inside a mountain on an island in the Arctic. It is designed to hold 2.5 billion seeds from more than 100 countries as a precaution to preserve the world's crops. The surrounding rock is −6 °C (21 °F) (as of 2015) but the vault is kept at −18 °C (0 °F) by refrigerators powered by locally sourced coal.[60][61]

More speculatively, if society continues to function and if the biosphere remains habitable, calorie needs for the present human population might in theory be met during an extended absence of sunlight, given sufficient advance planning. Conjectured solutions include growing mushrooms on the dead plant biomass left in the wake of the catastrophe, converting cellulose to sugar, or feeding natural gas to methane-digesting bacteria.[62][63]

Global catastrophic risks and global governance

Insufficient global governance creates risks in the social and political domain, but governance mechanisms develop more slowly than technological and social change. Governments, the private sector, and the general public have expressed concern about the lack of governance mechanisms for dealing efficiently with risks and for negotiating and adjudicating between diverse and conflicting interests. This concern is further underlined by an understanding of the interconnectedness of global systemic risks.[64] In the absence or anticipation of global governance, national governments can act individually to better understand, mitigate and prepare for global catastrophes.[65]

Climate emergency plans

In 2018, the Club of Rome called for greater climate change action and published its Climate Emergency Plan, which proposes ten action points to limit global average temperature increase to 1.5 degrees Celsius.[66] Further, in 2019, the Club published the more comprehensive Planetary Emergency Plan.[67]

Moving the Earth

In a few billion years, the Sun will expand into a red giant, swallowing the Earth. This can be avoided by moving the Earth farther out from the Sun, keeping the temperature roughly constant. That can be accomplished by tweaking the orbits of comets and asteroids so they pass close to the Earth in such a way that they add energy to the Earth's orbit.[68] Since the Sun's expansion is slow, roughly one such encounter every 6,000 years would suffice.
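
A rough physical illustration of why such a gradual migration would suffice (a sketch based on the inverse-square law for solar flux, not a calculation from the cited paper): holding the sunlight received by Earth constant requires the orbital radius to grow only in proportion to the square root of the Sun's luminosity.

```python
import math

# Solar flux at Earth scales as L / d**2, so keeping the flux constant as the
# Sun brightens requires the orbital radius d to grow in proportion to sqrt(L).

def orbit_scaling_for_constant_flux(luminosity_ratio: float) -> float:
    """Factor by which the orbital radius must grow so that L / d**2 stays constant."""
    return math.sqrt(luminosity_ratio)

print(orbit_scaling_for_constant_flux(1.10))  # ~1.049: a 10% brighter Sun needs a ~5% wider orbit
```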

Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy and famously maintains the Doomsday Clock, established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study how otherwise harmless technologies could go haywire at a global scale. It was founded by K. Eric Drexler, who postulated "grey goo".[69][70]

Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.[71]

Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence,[72] with donors including Peter Thiel and Jed McCaleb.[73] The Nuclear Threat Initiative (est. 2001) seeks to reduce global threats from nuclear, biological and chemical weapons, and to contain damage after an event.[8] It maintains a nuclear material security index.[74] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe.[75] Most of the research money funds projects at universities.[76] The Global Catastrophic Risk Institute (est. 2011) is a think tank for catastrophic risk. It is funded by the NGO Social and Environmental Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks.[36][77] The Future of Life Institute (est. 2014) aims to support research and initiatives for safeguarding life in light of new technologies and challenges facing humanity.[7] Elon Musk is one of its biggest donors.[78] The Center on Long-Term Risk (est. 2016), formerly known as the Foundational Research Institute, is a British organization focused on reducing risks of astronomical suffering (s-risks) from emerging technologies.[79]

University-based organizations include the Future of Humanity Institute (est. 2005), which researches the questions of humanity's long-term future, particularly existential risk.[5] It was founded by Nick Bostrom and is based at Oxford University.[5] The Centre for the Study of Existential Risk (est. 2012) is a Cambridge University-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare.[6] All are man-made risks, as Huw Price explained to the AFP news agency: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."[80] Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together academics in the humanities.[81][82] It was founded by Paul Ehrlich among others.[83] Stanford University also has the Center for International Security and Cooperation, which focuses on political cooperation to reduce global catastrophic risk.[84] The Center for Security and Emerging Technology was established in January 2019 at Georgetown's Walsh School of Foreign Service and will focus on policy research of emerging technologies, with an initial emphasis on artificial intelligence.[85] It received a grant of US$55 million from Good Ventures, as suggested by Open Philanthropy.[85]

Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called the Global Alert and Response (GAR), which monitors and responds to global epidemic crises.[86] GAR helps member states with training and coordination of response to epidemics.[87] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program, which aims to prevent and contain naturally generated pandemics at their source.[88] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate, which researches, on behalf of the government, issues such as bio-security and counter-terrorism.[89]

History

Early history of thinking about human extinction

Before the 18th and 19th centuries, the possibility that humans or other organisms could go extinct was viewed with scepticism.[90] It contradicted the principle of plenitude, a doctrine that all possible things exist.[90] The principle traces back to Aristotle, and was an important tenet of Christian theology.[91]: 121  The doctrine was gradually undermined by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist, and the development of theories of evolution.[91]: 121  In On the Origin of Species, Darwin discussed the extinction of species as a natural process and a core component of natural selection.[92] Notably, Darwin was skeptical of the possibility of sudden extinctions, viewing extinction as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions, but rather a function of unrecognised gaps in the record.[92]

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction.[90] Beyond science, human extinction was explored in literature. The Romantic authors and poets were particularly interested in the topic.[90] Lord Byron wrote about the extinction of life on earth in his 1816 poem ‘Darkness’, and in 1824 envisaged humanity being threatened by a comet impact, and employing a missile system to defend against it.[93] Mary Shelley’s 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague.[93]

Atomic era

Castle Romeo nuclear test on Bikini Atoll.

The invention of the atomic bomb prompted a wave of discussion about the risk of human extinction among scientists, intellectuals, and the public at large.[90] In a 1945 essay, Bertrand Russell wrote that "[T]he prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense."[94] A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind".[95]

The discovery of 'nuclear winter' in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the badness of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."[96]

Modern era

John Leslie's 1996 book The End of The World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats for the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided.[22] Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks was published in 2008, a collection of essays from 26 academics on various global catastrophic and existential risks.[97] Toby Ord's 2020 book The Precipice: Existential Risk and the Future of Humanity argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology.[18]

Notes

  1. ^ Schulte, P.; et al. (March 5, 2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous-Paleogene Boundary" (PDF). Science. 327 (5970): 1214–1218. Bibcode:2010Sci...327.1214S. doi:10.1126/science.1177265. PMID 20203042. S2CID 2659741.
  2. ^ Bostrom, Nick (2008). Global Catastrophic Risks (PDF). Oxford University Press. p. 1.
  3. ^ a b Ripple WJ, Wolf C, Newsome TM, Galetti M, Alamgir M, Crist E, Mahmoud MI, Laurance WF (November 13, 2017). "World Scientists' Warning to Humanity: A Second Notice". BioScience. 67 (12): 1026–1028. doi:10.1093/biosci/bix125.
  4. ^ Bostrom, Nick (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 9.
  5. ^ a b c "About FHI". Future of Humanity Institute. Retrieved August 12, 2021.
  6. ^ a b "About us". Centre for the Study of Existential Risk. Retrieved August 12, 2021.
  7. ^ a b "The Future of Life Institute". Future of Life Institute. Retrieved May 5, 2014.
  8. ^ a b "Nuclear Threat Initiative". Nuclear Threat Initiative. Retrieved June 5, 2015.
  9. ^ a b c d e f g Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority" (PDF). Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002 – via Existential Risk.
  10. ^ Bostrom, Nick; Cirkovic, Milan (2008). Global Catastrophic Risks. Oxford: Oxford University Press. p. 1. ISBN 978-0-19-857050-9.
  11. ^ Ziegler, Philip (2012). The Black Death. Faber and Faber. p. 397. ISBN 9780571287116.
  12. ^ Muehlhauser, Luke (March 15, 2017). "How big a deal was the Industrial Revolution?". lukemuelhauser.com. Retrieved August 3, 2020.
  13. ^ Taubenberger, Jeffery; Morens, David (2006). "1918 Influenza: the Mother of All Pandemics". Emerging Infectious Diseases. 12 (1): 15–22. doi:10.3201/eid1201.050979. PMC 3291398. PMID 16494711.
  14. ^ Posner, Richard A. (2006). Catastrophe: Risk and Response. Oxford: Oxford University Press. ISBN 978-0195306477. Introduction, "What is Catastrophe?"
  15. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916. This is an equivalent, though crisper statement of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." Source: Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4:15-31.
  16. ^ Cotton-Barratt, Owen; Ord, Toby (2015), Existential risk and existential hope: Definitions (PDF), Future of Humanity Institute – Technical Report #2015-1, pp. 1–4
  17. ^ a b Bostrom, Nick (2009). "Astronomical Waste: The opportunity cost of delayed technological development". Utilitas. 15 (3): 308–314. CiteSeerX 10.1.1.429.2849. doi:10.1017/s0953820800004076. S2CID 15860897.
  18. ^ a b c d e f g h i j k l m n o p q r s Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916.
  19. ^ a b Bryan Caplan (2008). "The totalitarian threat". Global Catastrophic Risks, eds. Bostrom & Cirkovic (Oxford University Press): 504-519. ISBN 9780198570509
  20. ^ Glover, Dennis (June 1, 2017). "Did George Orwell secretly rewrite the end of Nineteen Eighty-Four as he lay dying?". The Sydney Morning Herald. Retrieved November 21, 2021. Winston's creator, George Orwell, believed that freedom would eventually defeat the truth-twisting totalitarianism portrayed in Nineteen Eighty-Four.
  21. ^ Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.
  22. ^ a b Rees, Martin (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century - On Earth and Beyond. Basic Books. ISBN 0-465-06863-4.
  23. ^ a b c Bostrom, Nick; Sandberg, Anders (2008). "Global Catastrophic Risks Survey" (PDF). FHI Technical Report #2008-1. Future of Humanity Institute.
  24. ^ a b c d Snyder-Beattie, Andrew E.; Ord, Toby; Bonsall, Michael B. (July 30, 2019). "An upper bound for the background rate of human extinction". Scientific Reports. 9 (1): 11054. Bibcode:2019NatSR...911054S. doi:10.1038/s41598-019-47540-7. ISSN 2045-2322. PMC 6667434. PMID 31363134.
  25. ^ "Frequently Asked Questions". Existential Risk. Future of Humanity Institute. Retrieved July 26, 2013. The great bulk of existential risk in the foreseeable future is anthropogenic; that is, arising from human activity.
  26. ^ a b Matheny, Jason Gaverick (2007). "Reducing the Risk of Human Extinction" (PDF). Risk Analysis. 27 (5): 1335–1344. doi:10.1111/j.1539-6924.2007.00960.x. PMID 18076500.
  27. ^ Asher, D.J.; Bailey, M.E.; Emel'yanenko, V.; Napier, W.M. (2005). "Earth in the cosmic shooting gallery" (PDF). The Observatory. 125: 319–322. Bibcode:2005Obs...125..319A.
  28. ^ a b c Rampino, M.R.; Ambrose, S.H. (2002). "Super eruptions as a threat to civilizations on Earth-like planets" (PDF). Icarus. 156 (2): 562–569. Bibcode:2002Icar..156..562R. doi:10.1006/icar.2001.6808.
  29. ^ Yost, Chad L.; Jackson, Lily J.; Stone, Jeffery R.; Cohen, Andrew S. (March 1, 2018). "Subdecadal phytolith and charcoal records from Lake Malawi, East Africa imply minimal effects on human evolution from the ∼74 ka Toba supereruption". Journal of Human Evolution. 116: 75–94. doi:10.1016/j.jhevol.2017.11.005. ISSN 0047-2484. PMID 29477183.
  30. ^ Sagan, Carl (1994). Pale Blue Dot. Random House. pp. 305–6. ISBN 0-679-43841-6. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
  31. ^ Parfit, Derek (2011). On What Matters Vol. 2. Oxford University Press. p. 616. ISBN 9780199681044. We live during the hinge of history ... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period.
  32. ^ Rowe, Thomas; Beard, Simon (2018). "Probabilities, methodologies and the evidence base in existential risk assessments" (PDF). Working Paper, Centre for the Study of Existential Risk. Retrieved August 26, 2018.
  33. ^ Leslie, John (1996). The End of the World: The Science and Ethics of Human Extinction. Routledge. p. 146.
  34. ^ Bostrom, Nick (2002), "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", Journal of Evolution and Technology, 9
  35. ^ Rees, Martin (2004) [2003]. Our Final Century. Arrow Books. p. 9.
  36. ^ a b Meyer, Robinson (April 29, 2016). "Human Extinction Isn't That Unlikely". The Atlantic. Boston, Massachusetts: Emerson Collective. Retrieved April 30, 2016.
  37. ^ Grace, Katja; Salvatier, John; Dafoe, Allen; Zhang, Baobao; Evans, Owain (May 3, 2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
  38. ^ Purtill, Corinne. "How Close Is Humanity to the Edge?". The New Yorker. Retrieved January 8, 2021.
  39. ^ "Will humans go extinct by 2100?". Metaculus. November 12, 2017. Retrieved August 12, 2021.
  40. ^ a b Sagan, Carl (Winter 1983). "Nuclear War and Climatic Catastrophe: Some Policy Implications". Foreign Affairs. Council on Foreign Relations. doi:10.2307/20041818. JSTOR 20041818. Retrieved August 4, 2020.
  41. ^ Jebari, Karim (2014). "Existential Risks: Exploring a Robust Risk Reduction Strategy" (PDF). Science and Engineering Ethics. 21 (3): 541–54. doi:10.1007/s11948-014-9559-3. PMID 24891130. S2CID 30387504. Retrieved August 26, 2018.
  42. ^ a b Cirkovic, Milan M.; Bostrom, Nick; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Risk Analysis. 30 (10): 1495–1506. doi:10.1111/j.1539-6924.2010.01460.x. PMID 20626690.
  43. ^ Kemp, Luke (February 2019). "Are we on the road to civilization collapse?". BBC. Retrieved August 12, 2021.
  44. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. ISBN 9780316484893. Europe survived losing 25 to 50 percent of its population in the Black Death, while keeping civilization firmly intact
  45. ^ a b Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks" (PDF). Global Catastrophic Risks: 91–119. Bibcode:2008gcr..book...86Y.
  46. ^ Desvousges, W.H., Johnson, F.R., Dunford, R.W., Boyle, K.J., Hudson, S.P., and Wilson, N. 1993, Measuring natural resource damages with contingent valuation: tests of validity and reliability. In Hausman, J.A. (ed), Contingent Valuation:A Critical Assessment, pp. 91–159 (Amsterdam: North Holland).
  47. ^ a b Parfit, Derek (1984). Reasons and Persons. Oxford University Press. pp. 453–454.
  48. ^ MacAskill, William; Yetter Chappell, Richard (2021). "Population Ethics | Practical Implications of Population Ethical Theories". Introduction to Utilitarianism. Retrieved August 12, 2021.
  49. ^ Todd, Benjamin (2017). "The case for reducing existential risks". 80,000 Hours. Retrieved January 8, 2020.
  50. ^ Narveson, Jan (1973). "Moral Problems of Population". The Monist. 57 (1): 62–86. doi:10.5840/monist197357134. PMID 11661014.
  51. ^ Greaves, Hilary (2017). "Discounting for Public Policy: A Survey". Economics & Philosophy. 33 (3): 391–439. doi:10.1017/S0266267117000062. ISSN 0266-2671. S2CID 21730172.
  52. ^ Greaves, Hilary (2017). "Population axiology". Philosophy Compass. 12 (11): e12442. doi:10.1111/phc3.12442. ISSN 1747-9991.
  53. ^ Lewis, Gregory (May 23, 2018). "The person-affecting value of existential risk reduction". www.gregoryjlewis.com. Retrieved August 7, 2020.
  54. ^ Burke, Edmund (1999) [1790]. "Reflections on the Revolution in France" (PDF). In Canavan, Francis (ed.). Select Works of Edmund Burke Volume 2. Liberty Fund. p. 192.
  55. ^ Weitzman, Martin (2009). "On modeling and interpreting the economics of catastrophic climate change" (PDF). The Review of Economics and Statistics. 91 (1): 1–19. doi:10.1162/rest.91.1.1. S2CID 216093786.
  56. ^ Posner, Richard (2004). Catastrophe: Risk and Response. Oxford University Press.
  57. ^ a b Cotton-Barratt, Owen; Daniel, Max; Sandberg, Anders (2020). "Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter". Global Policy. 11 (3): 271–282. doi:10.1111/1758-5899.12786. ISSN 1758-5899. PMC 7228299. PMID 32427180.
  58. ^ "Mankind must abandon earth or face extinction: Hawking", physorg.com, August 9, 2010, retrieved January 23, 2012
  59. ^ Smil, Vaclav (2003). The Earth's Biosphere: Evolution, Dynamics, and Change. MIT Press. p. 25. ISBN 978-0-262-69298-4.
  60. ^ Lewis Smith (February 27, 2008). "Doomsday vault for world's seeds is opened under Arctic mountain". The Times Online. London. Archived from the original on May 12, 2008.
  61. ^ Suzanne Goldenberg (May 20, 2015). "The doomsday vault: the seeds that could save a post-apocalyptic world". The Guardian. Retrieved June 30, 2017.
  62. ^ "Here's how the world could end—and what we can do about it". Science | AAAS. July 8, 2016. Retrieved March 23, 2018.
  63. ^ Denkenberger, David C.; Pearce, Joshua M. (September 2015). "Feeding everyone: Solving the food crisis in event of global catastrophes that kill crops or obscure the sun" (PDF). Futures. 72: 57–68. doi:10.1016/j.futures.2014.11.008.
  64. ^ "Global Challenges Foundation | Understanding Global Systemic Risk". globalchallenges.org. Archived from the original on August 16, 2017. Retrieved August 15, 2017.
  65. ^ "Global Catastrophic Risk Policy |". gcrpolicy.com. Retrieved August 11, 2019.
  66. ^ Club of Rome (2018). "The Climate Emergency Plan". Retrieved August 17, 2020.
  67. ^ Club of Rome (2019). "The Planetary Emergency Plan". Retrieved August 17, 2020.
  68. ^ Korycansky, Donald G.; Laughlin, Gregory; Adams, Fred C. (2001). "Astronomical engineering: a strategy for modifying planetary orbits". Astrophysics and Space Science. 275 (4): 349–366.
  69. ^ Fred Hapgood (November 1986). "Nanotechnology: Molecular Machines that Mimic Life" (PDF). Omni. Archived from the original (PDF) on July 27, 2013. Retrieved June 5, 2015.
  70. ^ Giles, Jim (2004). "Nanotech takes small step towards burying 'grey goo'". Nature. 429 (6992): 591. Bibcode:2004Natur.429..591G. doi:10.1038/429591b. PMID 15190320.
  71. ^ Sophie McBain (September 25, 2014). "Apocalypse soon: the scientists preparing for the end times". New Statesman. Retrieved June 5, 2015.
  72. ^ "Reducing Long-Term {{subst:lc:Catastrophic}} Risks from Artificial Intelligence". Machine Intelligence Research Institute. Retrieved June 5, 2015. The Machine Intelligence Research Institute aims to reduce the risk of a catastrophe, should such an event eventually occur.
  73. ^ Angela Chen (September 11, 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Retrieved June 5, 2015.
  74. ^ Alexander Sehmar (May 31, 2015). "Isis could obtain nuclear weapon from Pakistan, warns India". The Independent. Archived from the original on June 2, 2015. Retrieved June 5, 2015.
  75. ^ "About the Lifeboat Foundation". The Lifeboat Foundation. Retrieved April 26, 2013.
  76. ^ Ashlee Vance (July 20, 2010). "The Lifeboat Foundation: Battling Asteroids, Nanobots and A.I." New York Times. Retrieved June 5, 2015.
  77. ^ "Global Challenges Foundation website". globalchallenges.org. Retrieved April 30, 2016.
  78. ^ Nick Bilton (May 28, 2015). "Ava of 'Ex Machina' Is Just Sci-Fi (for Now)". New York Times. Retrieved June 5, 2015.
  79. ^ "About Us". Center on Long-Term {{subst:lc:Risk}}. Retrieved May 17, 2020. We currently focus on efforts to reduce the worst risks of astronomical suffering (s-risks) from emerging technologies, with a focus on transformative artificial intelligence.
  80. ^ Hui, Sylvia (November 25, 2012). "Cambridge to study technology's risks to humans". Associated Press. Archived from the original on December 1, 2012. Retrieved January 30, 2012.
  81. ^ Scott Barrett (2014). Environment and Development Economics: Essays in Honour of Sir Partha Dasgupta. Oxford University Press. p. 112. ISBN 9780199677856. Retrieved June 5, 2015.
  82. ^ "Millennium Alliance for Humanity & The Biosphere". Millennium Alliance for Humanity & The Biosphere. Retrieved June 5, 2015.
  83. ^ Guruprasad Madhavan (2012). Practicing Sustainability. Springer Science & Business Media. p. 43. ISBN 9781461443483. Retrieved June 5, 2015.
  84. ^ "Center for International Security and Cooperation". Center for International Security and Cooperation. Retrieved June 5, 2015.
  85. ^ a b "Georgetown launches think tank on security and emerging technology". Washington Post. Retrieved March 12, 2019.
  86. ^ "Global Alert and Response (GAR)". World Health Organization. Archived from the original on February 16, 2003. Retrieved June 5, 2015.
  87. ^ Kelley Lee (2013). Historical Dictionary of the World Health Organization. Rowman & Littlefield. p. 92. ISBN 9780810878587. Retrieved June 5, 2015.
  88. ^ "USAID Emerging Pandemic Threats Program". USAID. Archived from the original on October 22, 2014. Retrieved June 5, 2015.
  89. ^ "Global Security". Lawrence Livermore National Laboratory. Retrieved June 5, 2015.
  90. ^ a b c d e Moynihan, Thomas (2020). X-Risk: How Humanity Discovered Its Own Extinction. Urbanomic. ISBN 978-1913029845.
  91. ^ a b Darwin, Charles; Costa, James T. (2009). The Annotated Origin. Harvard University Press. ISBN 978-0674032811.
  92. ^ a b Raup, David M. (1995). "The Role of Extinction in Evolution". In Fitch, W. M.; Ayala, F. J. (eds.). Tempo And Mode in Evolution: Genetics And Paleontology 50 Years After Simpson. National Academies Press (US).
  93. ^ a b Moynihan, Thomas (2019). "The end of us". Aeon. Retrieved August 14, 2020.
  94. ^ Russell, Bertrand (1945). "The Bomb and Civilization". Archived from the original on August 7, 2020.
  95. ^ Erskine, Hazel Gaudet (1963). "The Polls: Atomic Weapons and Nuclear Energy". The Public Opinion Quarterly. 27 (2): 155–190. doi:10.1086/267159. JSTOR 2746913.
  96. ^ Sagan, Carl (January 28, 2009). "Nuclear War and Climatic Catastrophe: Some Policy Implications". doi:10.2307/20041818. JSTOR 20041818. Retrieved August 11, 2021.
  97. ^ Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global catastrophic risks. Oxford University Press. ISBN 978-0199606504.
