User:TeddyLiu/sandbox
Summary of literature on human survivability as of May 2013
[edit]Global Catastrophic Risks, Oxford, 2008
This 554-page book collects 22 chapters by 25 authors on a wide range of topics. Its editors are Nick Bostrom and Milan Ćirković. It was reviewed by David Aldous,[1] FRS, of the department of statistics at the University of California, Berkeley, who rates it 5 stars.[2] He praises a chapter by Eliezer Yudkowsky on cognitive biases in an individual's risk assessments. He pans a chapter on social collapse.
Descriptive literature
The following authors describe existential risks with little or no attempt to assign numbers to the risks or to predict survival times:
Bill Joy
Joy is a co-founder of Sun Microsystems, a key contributor to the Java programming language, and a major contributor to cybertechnology. He fears the consequences of this technology, essentially robots taking control of our world. Joy favors relinquishing progress in GNR, i.e. genetics, nanotechnology, and robotics, and for this conviction has been called a neo-Luddite. He wrote an essay, "Why the Future Doesn't Need Us", published in Wired magazine. He is aware of the difficulties: if any one of 200 nations encourages a forbidden technology, its advocates will gravitate to that nation and pursue it there. This has already happened in the US, when the federal government restricted research with human embryos.[3]
Fred Guterl
Guterl is executive editor of Scientific American. His book, The Fate of the Species (Bloomsbury, 2012), includes only the orthodox threats, those that are widely discussed in the media: superviruses, big natural events (asteroid strikes, volcanoes, and the like), climate change, ecosystem degradation, synthetic biology, machines and artificial intelligence. He approaches these subjects like a journalist, describing pertinent real-life events and interviewing the people involved. Kirkus Reviews describes the book as "an intelligent account of the mess we are making of the planet; the unsettling conclusion: that humans may survive because we are resilient, not because we can fix matters."
Richard Posner
Posner wrote Catastrophe: Risk and Response, Oxford, 2004. His day job is judge of the US Court of Appeals, 7th Circuit, and indeed his book emphasizes judicial aspects of many issues that the other authors overlook. For example, what sort of laws or constitutional amendments would be needed to give the president power to bypass Congress and thereby respond very quickly to an existential threat?
Posner recognizes four classes of catastrophe:
- natural ones such as asteroid strikes and pandemics
- scientific and high-tech accidents such as particle accelerators, nanotechnology, and artificial intelligence
- unintentional man-made catastrophes, such as climate change and loss of biodiversity
- intentional man-made, i.e. terrorism
Posner chooses one example from each class:
- asteroid strike — a poor choice in the opinion of those scholars who claim that purely natural risks (those with no human involvement) are negligible compared to risks that are man-made or man-aggravated. An example of the latter is a pandemic spread by modern transportation, which makes quarantine ineffective. More discussion below.
- particle accelerator disaster — perhaps a diminished risk following the uneventful confirmation of the Higgs boson in March 2013. See also strangelets and black hole production.
- global warming
- bioterrorism — perhaps the greatest risk; see Rees below.
Posner follows each of these examples throughout his four chapters:
- What are the catastrophic risks, and how bad are they?
- Why is so little being done about them?
- How to evaluate catastrophic risks and possible responses
- How to reduce the risks
Unlike the other studies described here, Judge Posner puts strong emphasis on cost-benefit analysis. Each threat has some probability of happening and some number of casualties if it does. The product of the two is the expected number of casualties. Dividing the cost of prevention by this expected number gives the cost/benefit ratio, and Posner would allocate resources where the ratio is least. He advocates expensive public programs to mitigate the risk to humankind. He is aware that persuading taxpayers to bear the expense of intervention will be difficult, since the hazards have not yet frightened anyone and may not become lethal during the taxpayers' lifetimes.
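To make the arithmetic concrete, here is a minimal sketch of Posner-style cost-benefit ranking. The threat names and every number below are hypothetical illustrations, not figures from Posner's book.

    # Posner-style cost-benefit ranking (illustrative numbers only).
    threats = {
        # name: (probability of occurrence, casualties if it occurs, cost of prevention in $)
        "hypothetical threat A": (1e-6, 1.5e9, 2e9),
        "hypothetical threat B": (1e-3, 1e8, 1e10),
    }

    for name, (prob, casualties, cost) in threats.items():
        expected = prob * casualties   # expected number of casualties
        ratio = cost / expected        # cost per expected casualty averted
        print(f"{name}: expected casualties = {expected:,.0f}, "
              f"cost/benefit = ${ratio:,.0f} per expected casualty")
    # Posner would fund the threat with the smallest ratio first.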
Posner does not shrink from topics that many find distasteful. For example, cost-benefit analyses with human life at stake require a monetary value for human life, and Posner describes ways to estimate this; estimates vary, but a typical figure is $3 million. He also considers prediction markets as a means to predict terrorism and thus be prepared, but he is concerned that terrorists could manipulate such markets. Under "extreme police measures" he discusses the justifications for torture of terrorists and reprisals against their families.
To preclude catastrophe, Posner makes a plea for science education for lawyers, science courts, international agencies, compulsory reviews of proposed projects for their catastrophic-risk potential, and so on. He criticizes civil libertarians for opposing even legitimate losses of liberty required for security. He also criticizes conservatives for denial of global warming. In discussing this he dwells on the culture shock between scientists and lawyers. Examples appear below in the sections on Casti's and Wells' views, some of which conflict with Posner's.
John Casti
Casti is a complexity scientist, systems theorist, and author of X-Events: The Collapse of Everything (William Morrow, 2012). The X in the title denotes extreme, for example existential. One reviewer calls Casti an optimist of the apocalypse.[4] Another reminds us of the saying "too big to fail", often heard during the financial crisis of 2007–08.[5] Kirkus[6] also provides a review.
Casti writes, "The distilled essence of my message ... is that complexity overload is the precipitating cause of X-events." Other main points:
"Generally speaking, the best solution for solving a complexity mismatch is to simplify the system that's too complex ... rather than to "complexify" the simpler system. So, for example, in the case of financial markets, it would be vastly preferable to eliminate ... exotic financial instruments that nobody really understands rather than to ... beef up the ... rules and regulations to control them."
The following quote applies to Posner and others who would safeguard humanity by institutional means (laws, regulations, treaties, and the like): "The addition of safety checks often contributes to the complexity and thus can actually work against the reliability of the system instead of enhancing it."
The bulk of X-Events discusses 11 specific events: (1) failure of the Internet, (2) breakdown of the food supply, (3) a continent-wide electromagnetic pulse destroying all electronics, (4) collapse of globalization, (5) death by physics, i.e. exotic particles, (6) destabilization of the nuclear threat, (7) petroleum exhaustion, (8) global pandemic, (9) failure of the electric power grid, (10) intelligent robots overthrowing humanity, and (11) collapse of world financial markets.
Martin Rees
Lord Rees is the author of Our Final Century, Arrow Books, 2004. He is England's Astronomer Royal, a professor at Cambridge University, past president of the Royal Society, and past Master of Trinity College. He thinks the odds are no better than 50-50 that our present civilization on Earth will survive another 100 years. He has wagered $1,000 that by the year 2020 a single instance of bioerror or bioterror will have killed a million people.
Following are examples of Rees' observations:
- Routine vaccination for smallpox stopped long ago, but stocks of the virus still exist. Its incubation period is about 12 days, so if a terrorist sprayed it around a busy airport, secondary infections would spread throughout the world before the outbreak was recognized. Quarantine would be ineffective, and billions would likely die.
- The Internet promotes fanaticism by providing so many sources of news that each potential fanatic can find a source that tells him just what he wants to hear purged of information that would challenge his prejudices. Moreover, fanatics can find one another on the Internet and thereby organize conspiracies.
- Intrusive surveillance may be the least-bad safeguard. To be acceptable it must work both ways, allowing the public to spy on government as well as on one another. David Brin has written about this in The Transparent Society.
- Numerous commentators have observed that the ultimate way to ensure human survival is to have redundant habitats in space, so that if one expires, one or more others survive. The habitat may be an O'Neill cylinder, a huge cylinder rotating to generate artificial gravity so that people can live normally on its inside surface. A big obstacle is the energy (fuel) required to escape from Earth to high orbit. We need a propulsion system powered from Earth's surface, so that the craft need not carry rocket fuel as a big fraction of its cargo. A space elevator is theoretically possible but not yet feasible: a geostationary satellite tethered to Earth's surface, with a car to carry freight up and down the tether cable. Some variants of the cylinder, such as the Stanford torus, would house tens of thousands of inhabitants, but Rees thinks these are too vulnerable to a single act of sabotage. A dispersed set of smaller habitats would be more robust.
Annalee Newitz
Her book, Scatter, Adapt, and Remember: How Humans Will Survive a Mass Extinction (Doubleday, 2013), is the latest of the books discussed here. It has been reviewed by Scientific American[7] and by Kirkus.[8]
Newitz reviews mass extinctions all the way back to the Permian, 250 million years ago, along with the four other major ones, including the dinosaur killer 65 million years ago. She reviews the strategies by which life has survived, noting that humans have advantages over other animals: "we can live almost anywhere and eat nearly anything, and we tell stories, which contain experiences that will help save us. We are also able to wander, like our besieged ancestors fanning out of Africa 70,000 years ago, to fit in elsewhere." Her scope includes the Jews surviving the Diaspora, and much more. Newitz's approach is journalistic, including many interviews with experts. All of this is condensed into 263 pages and concludes, "There may be horrific disasters, and many lives will be lost. But don't worry. As long as we keep exploring, humanity is going to survive."
Global Catastrophic Risks Survey
Technical Report 2008/1 by Anders Sandberg and Nick Bostrom, Future of Humanity Institute, Oxford University
This is the summary of a conference held at Oxford in July 2008. The main hazards considered were war (including civil and nuclear), molecular nanotech weapons, artificial intelligence, genetically engineered pandemic, nanotech accident, natural pandemic spread by man-made transportation, and nuclear terrorism. The report includes a survey in which participating experts gave their opinions about the risks. Their ultimate summary number is the median estimate of the probability of human extinction before 2100, namely 19%.
Analytic literature
The following authors emphasize the logic and mathematics for evaluating risks, and they give numerical estimates for the survival of civilization and/or humankind.
John Leslie
Leslie is a professor emeritus of philosophy and author of The End of the World: The Science and Ethics of Human Extinction (Routledge, 1996). He thinks the probability of extinction is about 30% after 5 centuries. A review appears in Population and Development Review,[9] a journal of the Population Council.
Leslie supports Brandon Carter and his "weak anthropic principle", and especially one application of that principle, the doomsday argument. The anthropic principle is almost a truism, except that people often forget to apply it. It says that our observations of the physical universe must take into account the observer's position therein (in both space and time), the observer typically being ourselves, and any selection bias that results from that position.
The doomsday argument admits that we may have been born extremely early in the duration of the human race, or extremely late, but holds that neither is likely: we were probably born somewhere in the big middle. If future humanity prospers, builds space ships, and colonizes our galaxy, then we shall have been born at the extreme beginning of humanity's time. Since this is unlikely, doomsday will probably arrive before we can do those things. Many scholars have attacked this argument. In his Chapter 5 Leslie divides the attacks into four groups and refutes them; more details appear in his Chapter 6.
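The sampling logic can be illustrated with a toy computation. This is a gloss on the standard presentation of the argument, not necessarily Leslie's exact formulation, and the 60 billion birth count is only a rough illustrative figure.

    # Toy doomsday argument: if an observer's birth rank r is equally likely to be
    # anywhere from 1 to N (the total number of humans who will ever live), then
    # with 95% confidence r > 0.05 * N, which rearranges to N < 20 * r.
    r = 60e9  # rough count of humans born so far (illustrative assumption)
    print(f"With 95% confidence, total humans ever born N < {20 * r:.1e}")  # 1.2e+12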
Much of Leslie's book is more conventional: the introduction and Chapters 1 and 2 enumerate the various hazards, much the same ones others have discussed. He discusses redundant off-Earth habitats, including O'Neill's cylinder, which is also described in Rees' book (see above). Leslie thinks that redundant habitats will prevent extinction five centuries from now, if only we can survive the interim. He briefly mentions artificial biospheres, a term popularized by the failed experiment in Arizona, Biosphere 2 (Earth itself being Biosphere 1). Leslie deplores the absurdity of nations' funding priorities: "If 1/100 as much had been spent on developing artificial biospheres as on making nuclear weapons, a lengthy future for humankind might by now be virtually assured."
Leslie discusses Fermi's paradox, named for Enrico Fermi's famous question, "Where are they?" He was referring to the total absence of any trace of extraterrestrial intelligence or of its astroengineering artifacts, even though our galaxy alone contains 100-400 billion stars, many of which surely have planets capable of supporting life. One plausible explanation for this absence is that exo-humanoids typically commit self-extinction through advanced technology before they are ready to explore the cosmos, just as we may be doing now.
There are two ways to analyze human survival, top-down and bottom-up. Bottom-up begins with a list of threats and then synthesizes the resultant overall risk. Top-down finds some principle that transcends individual threats, thus avoiding the need to make a complete list of them.
J. Richard Gott
Gott is a professor of astrophysics at Princeton University. His top-down analysis begins with an observer (you or me) who determines the age A of some entity (Homo sapiens in our case). If there is nothing special about the time of observation, a reasonable best estimate would assume that the observer arrives at a random time within the life of the entity, all such times being equally likely. In this case Gott shows that, given A, the probability that the entity will still exist at a future time F is
- Prob(F|A) = A/(A+F).
This formula gives 1.0 at the time of observation (F=0) and then decays to zero in the infinite future. It works for many entities, such as stage plays. But then Gott applies his formula to mankind using A = 2000 centuries, just as though there were nothing special about the present. However, most other scholars listed here think our time is very special, because dangerous new technology is proliferating at an increasing rate and world population has soared to unprecedented numbers. Moreover, the observer is more likely to have appeared recently in human history simply because most people have lived recently. Either argument invalidates Gott's assumption and puts us not at a random fraction of humanity's lifetime, but much closer to the end.
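As a quick numerical check of the formula, the snippet below reproduces the 97.5% figure quoted in the "Numerical survivability" section (survival for 51 more centuries, given A = 2000 centuries).

    def gott_survival(A, F):
        """Gott's estimate: probability of surviving at least F more time units, given age A."""
        return A / (A + F)

    A = 2000                     # age of Homo sapiens, in centuries
    print(gott_survival(A, 0))   # 1.0 at the moment of observation (F = 0)
    print(gott_survival(A, 51))  # ~0.975, i.e. Gott's 97.5% survival for 51 centuries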
Willard Wells
Wells is a theoretical physicist and mathematician. He shows that Gott's formula will apply to humanity if we interpret A and F not as calendar time but as some measure of cumulative risk exposure. (The two interpretations are identical in the special case where the risk rate is constant in time.) Wells' intuitive formulation does not meet the rigorous standards of pure mathematics; however, two professors of math and computer science approve, namely John J. Watkins[10] and S. Gill Williamson.[11][12] To further substantiate his formulation, Wells studied survival statistics of business firms and stage productions and showed that the formula applies with rare exceptions.
Next, Wells estimates humanity's risk exposure based on measures of dangerously rapid "progress", such as statistics of U.S. patents issued, the number of papers published in science and engineering, the number of pages published in Nature magazine, and gross world product. To estimate future risk exposure F, he develops equations that extrapolate these statistics into the future. Ultimately he derives a best estimate for survival probability expressed as a mathematical formula. Unfortunately, he does not provide error limits on the parameters in this formula.
Wells explains that his accuracy is degraded by some issues of statistical weight, and that his formulation contains simplifying assumptions that do not exactly match physical reality. His results could be off by a factor of two, which he regards as decent accuracy for a quantity as slippery as human survival. With that caveat, Wells estimates current risk rates at 9% per decade for the collapse of civilization and 3% per decade for extinction. Again, he does not offer error limits.
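For a rough feel for these rates, the sketch below compounds them over ten decades. Note the caveat: it assumes the rates stay constant, whereas Wells' own extrapolation has risk exposure growing, so these numbers only illustrate the current rates, not Wells' actual forecast.

    # Compound Wells' current per-decade risk rates over a century,
    # under the simplifying assumption that the rates remain constant.
    collapse_per_decade = 0.09
    extinction_per_decade = 0.03

    p_civ_survives = (1 - collapse_per_decade) ** 10     # ~0.39
    p_no_extinction = (1 - extinction_per_decade) ** 10  # ~0.74

    print(f"P(civilization survives 100 yr at constant rate) ~ {p_civ_survives:.2f}")
    print(f"P(no extinction in 100 yr at constant rate)      ~ {p_no_extinction:.2f}")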
As long as modern technology survives and proliferates, our world is too complex to estimate survivability beyond a century, more or less. During this period Wells considers the risk from all purely natural hazards to be negligibly small. By "purely" natural he means not exacerbated by human activity; for example, smallpox is natural, but modern transportation would now spread it so fast that quarantine would be ineffective, just as Rees describes. Although purely natural events do occur and cause great damage, they have not been a threat to human survival since the enormous volcanic explosion of Indonesia's Mt. Toba 740 centuries ago. (In this regard see Casti, X-Events, p. 20.) Wells claims that after surviving natural hazards for 740 centuries, the chance of succumbing during the very next century is only about 1/740. Contrast this minuscule risk with Posner's choice of an asteroid strike as one of his four paradigms.
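The 1/740 figure looks like the same A/(A+F) arithmetic applied to purely natural hazards. This reconstruction is the editor's, so whether Wells derives it exactly this way is an assumption.

    A, F = 740, 1               # centuries survived since Toba, and one century ahead
    risk = F / (A + F)          # = 1 - A/(A+F), chance of succumbing in the next century
    print(f"{risk:.5f}, about 1/{A + F}")  # ~0.00135, roughly the quoted 1/740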
Wells is wary of bottom-up analysis of survivability simply because no one has a complete list of all possible hazards. To make this point, he conjures up a couple of plausible extinction scenarios that nobody else has discussed in the literature. Ironically, as explained below, Wells' numerical results compare rather well with the scholars who do take the bottom-up view. Perhaps they made intuitive allowances for hazards that nobody has identified, or perhaps Wells was not perfectly objective while adjusting his formula for risk exposure using published statistics.
Numerical survivability
Five of the scholars estimate numerical probabilities of survival. For the reason explained above, Gott's is the outlier by far: 97.5% chance of survival for 51 centuries. Three more of them give a single number for a single quantity, a different quantity in each case. However, Wells' formula is capable of calculating each of these three quantities, and so it provides a means to quantify their agreement.
Rees estimates a 50% probability that civilization will suffer a major setback during the next 100 years. For comparison, Wells' formula gives civilization's half-life as 9 billion people-centuries. This agrees with Rees if the average population is 9 billion, a plausible number.
Leslie thinks the probability of extinction is about 30% after 5 centuries. Wells' formula gives 25%. The main reason extinction is relatively improbable is that civilization would likely collapse before extinction, destroying the man-made hazards that are the sources of the major risks.
A poll of experts at a conference sponsored by Oxford's Future of Humanity Institute asked the chance of extinction during the next 100 years. Their median answer was 19%. Wells' formula gives 19% after 10 billion people-centuries. So the two numbers agree if the average population is 10 billion people, again a plausible number; compare the 9 billion needed to agree with Rees above.
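The unit conversion behind these comparisons is simple enough to spell out. The assumed average populations (9 and 10 billion) come from the text above; everything else follows from treating Wells' exposure thresholds as people-centuries.

    # Convert Wells' risk-exposure thresholds (people-centuries) into calendar
    # centuries by dividing by an assumed average world population.
    def exposure_to_centuries(people_centuries, avg_population):
        return people_centuries / avg_population

    # Rees: a half-life of 9e9 people-centuries becomes 1 century if the
    # population averages 9 billion, i.e. a 50% chance of a setback in 100 years.
    print(exposure_to_centuries(9e9, 9e9))    # 1.0

    # Oxford poll: 19% extinction risk accrues by 10e9 people-centuries, which
    # becomes 1 century if the population averages 10 billion, matching 19%/100 yr.
    print(exposure_to_centuries(10e9, 10e9))  # 1.0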
These three comparisons are so close that the agreement must be either coincidental or a sign that Wells was not fully objective in choosing statistics for extrapolating risk exposure. He admits that his answers might differ from the true risk by a factor of about two, which would still be decent agreement for a quantity as fuzzy as human survival.
What to do about our vulnerability
Posner has the most concrete recommendations for responses to risks, mainly in his Chapter 4, "How to reduce the catastrophic risks." Joy proposes that we relinquish dangerous lines of investigation. Rees suggests that intrusive surveillance may be the least-bad safeguard. His chapter 6, "Slowing science down?", recognizes that a slower pace would allow more time for us to adapt safety measures to dangerous new technologies. These three scholars all acknowledge difficult political issues in establishing and enforcing such precautions.
Leslie deplores civilization's failure to develop artificial "biospheres". If one defines a biosphere as an airtight, self-sufficient enclosure powered by sunlight, then it is surely effective protection against airborne microorganisms, toxic gas, and artificial infestations such as robo-locusts, mosquito-bots, grey goo, and the like. However, a structure similar to Biosphere 2 is vulnerable to global warming and extreme weather such as tornadoes. Wells notes that an effective redoubt[13] against global warming and/or nuclear winter could be a rather ordinary structure in the extreme south: virtually all nuclear weapons and their targets are in the Northern Hemisphere, and the cold Antarctic oceans provide an enormous reservoir for excess heat, since water has high heat capacity.
When a survivable cataclysm occurs, safeguards will suddenly dominate the public interest. Until then, Wells claims, knowledgeable people who understand our situation wield too little political power to bring about effective safeguards, whether by the institutional means Posner proposes or the construction of biospheres that Leslie wants. But if the next event is the big one, something not survivable, and if Wells' opinion is correct, then humanity will be unprepared and extinction may follow. He is unwilling to take that chance. He recommends that the knowledgeable minority save humanity by saving themselves. They should form survival colonies and build redoubts where they can ride out several of the most likely catastrophes. They must be more sophisticated than traditional survivalists (aka doomsday preppers), who build hideaways and stock them with a deep larder, plenty of water, and firearms. For example, the neo-preppers must anticipate high-tech hazards and have enough people in each colony to comprise a viable breeding stock (about 100) in case they cannot find other survivors. Indeed, wealthy people are already buying luxury apartments in converted missile silos.[14]
Beyond Earth
Innumerable commentators have stated that redundant off-Earth habitats would be the ultimate insurance against extinction. Of the authors included here, only Rees devotes a full chapter to this subject, as described above. Rees and Leslie both discuss O'Neill's cylinder.
Gott stresses the importance of colonies in space, including Mars. At the end of his book Time Travel in Einstein's Universe, Gott states, "The goal of the human spaceflight program should be to increase our survival prospects by colonizing space."
Wells thinks more like Hans Moravec. He wants to develop humanoids with artificial intelligence (androids?) that are programmed at every level of their intellect to love humanity and to nurture us. They should be acceptable to us as our descendants, or Mind Children, which is the title of a book by Moravec. These artificial descendants will be designed with spaceworthy bodies, especially the ability to hibernate for the duration of interstellar travel. Moreover, the group going to a particular exoplanet will be custom-designed for compatibility with that planet's environment.
Wells does not want bio-humanity stuck in a habitat where we are pitifully maladapted and dependent on an agriculture that is equally maladapted. And he does not want to pass up this opportunity to purge humanity of disease, genetic defects, design defects, and pain. Wells thinks that bio-humans may travel to the stars as frozen embryos in the care of robo-nannies. There we would live as pampered pets in comfortable zoos with climate control.
References
[edit]- ^ David Aldous, home page at UC Berkeley
- ^ Aldous′ review
- ^ Without aid, work moving overseas, article in The Boston Globe, May 2004
- ^ J. Franz Spiegel, The Vienna Review, July/August 2012
- ^ Prairie Progressive 11 June 2012
- ^ Kirkus review of Casti's book
- ^ Review of Newitz' book, Scientific American
- ^ Kirkus review of Newitz
- ^ Geoffrey McNicoll, Popul. Dev. Rev. Vol 23, #4, pp. 905–8, Dec. 1997, also online
- ^ John J. Watkins, Apocalypse When?, Mathematical Intelligencer, vol.34, #2, pp.71-2, also online
- ^ Williamson's faculty page
- ^ Williamson's review of Wells' book
- ^ The case for survival colonies
- ^ Redoubts for the wealthy
________________________________________________________
Annotations added to last part of the talk page
Looking at your edit today, some suggestions:
- Please organize by concept instead of by book like in Green Cardamom's suggested Decline of the Roman Empire#Theories of a fall, decline, transition and continuity
- Adopt WP:NPOV, stick to summarizing the cited opinions and books instead of telling us which ones are right
- Where did I tell which ones are right?
- As one example of many, Wells "conjures up a couple of plausible extinction scenarios that nobody else has discussed in the literature." I don't know what these scenarios are, but it's a POV that they are plausible. I guess we could explain the scenarios and allow readers to decide for themselves if they're plausible, but we're already giving [WP:UNDUE] weight to a rather obscure book in this article.
- Reviews of books should go on a separate page for the book being reviewed. Omit reviews from amazon.com users, even if they're professors.
- Re Williamson, I reluctantly agree, but Aldous is not just any professor, he's a Fellow of the Royal Society.
- I just looked it up: there are over 1000 Fellows. Also look at other articles about science books for an idea of what's considered notable; if you find any other Amazon reviews cited, let me know so I can either change my mind or delete them as well.
IMHO omit blogger reviews as well; only go with notable reviews from published magazines, journals, and papers. For example, Global Catastrophic Risks seems to have been reviewed in Nature; so Global Catastrophic Risks could get its own small page and a link to the review could be included there. (http://www.nature.com/nature/journal/v455/n7214/full/455732b.html?free=2)
- Yes, I'll include the Nature review.
- There's a separate page already for the Doomsday argument.
- Yes, already have a link to it.
- I was unclear. I'm proposing that most of the text on Gott and maybe Wells should go in that article, with maybe briefer summaries in this article. Any summaries shouldn't give WP:UNDUE weight to Gott and Wells over the many peer-reviewed articles discussing the DA.
Rolf H Nelson (talk) 04:52, 19 June 2013 (UTC)
Reverted the edit until it can get more discussion. Also, TeddyLiu, are you Wells, or someone with a connection to Wells? If so you should disclose that. Rolf H Nelson (talk) 05:15, 19 June 2013 (UTC)
- My connection to Wells is that I read his book first, so it became my basis for comparisons. As an optimist by nature, I was disturbed by his calculation of very high risk of a global catastrophic event. So I slogged through his math and tried to refute it, but could not. Wells had firm support at every step of his argument. So I turned to other authors. Some gave numerical risks, and those were very similar although their methods were entirely different.
- There is another reason to use Wells as a basis for comparison. Rees gives the half-life of civilization; Leslie says 30% chance of extinction in 5 centuries; the Oxford conference says 19% in 100 years. All are somewhat different quantities (apples and oranges). By contrast, Wells' formulas and graphs are flexible enough that I could compare them to each of the other authors. Please don't tell me that I'm doing original research when I read numbers off Wells' published graphs. That would be an outlandish definition of research.
- I agree with Rolf H Nelson on all points, including the similarity to and POV of Willard Wells, who was here a few months ago as Will9194 (talk · contribs), proposing similar changes. As Rolf said, if you have any connection to Wells, you should disclose it immediately.
- Additional notes on your proposed addition:
- Lose the "as of May 2013"; don't ask the reader questions; avoid "More discussion below" and similar (the text "below" may later change); don't link section titles (as you did with many authors' names); in general, see WP:Manual of Style
- I don't understand the objection to linking section titles, or the reason not to do it.
- We could work with summaries by author, but they should be shorter, more closely grouped by concept, and more focused on the concepts, rather than the authors themselves.
- Have other major publications categorized the relevant literature as either "Descriptive" or "Analytic"? If not, we should probably avoid doing so.
- I doubt it. The books fall naturally into two categories, which logically should go into two subsections. Then I need titles for the subsections, so what am I supposed to do? I picked two descriptive words that carry no emotional or political baggage. If you can't allow me that modest bit of creativity, then I can't work with you guys. I'll just have to find some other medium to warn the human race.
- Ensure the works you cite are reliable sources. As Rolf indicated, only about half of the "references" you provided are reliable sources, and even then, some are being used inappropriately.
- At over 70kB, this is already a long article. Your addition brings it to nearly 100kB. The rule of thumb is that at these lengths, we should be more concerned with splitting the article apart, rather than adding to it. Make any additions as concise as possible.
- I agree this article is too long as do others on the talk page, most recently the unsigned #27 in the TOC. He wants to split it by threats to civilization and human extinction, but that won't work. They are almost exactly the same threats. Whether they take out civilization or the human species is just a matter of severity, maybe only a small change.
- There should be one article for readers who want to relax with light reading about the distant past or future: how the sun will expand in some billions of years and envelop Earth's orbit, how Mercury could become unstable and wander into Earth's orbit, what a super-volcano did 74,000 years ago, and so forth.
- The second article should be for readers worried about imminent threats: global warming, artificial viruses, artificial intelligence, etc. Some of these readers are struggling with serious life decisions. Should they spend their life's savings to organize a survival colony and build a bunker in some remote place (neo-preppers)? Should they bring children into this world at such a risky time? Etc.
- There are not nearly enough references, which indicates that large portions of your text are original research. Further, the style and tone suggest original synthesis. We cannot accept either. I even see direct quotations without attribution.
- Whoa! Each of the main sections tells about a different publication, mostly books. And that publication is the implied reference for everything in that section with only a few exceptions to add to the reference list at the end. I like Green Cardamom's suggested organization in part because it simplifies the attribution.
- Be especially wary of making "connections" between (or even comparing) unrelated works (which have not already been made in reliable sources).
- These are obvious comparisons that readers would see for themselves if they looked up the original sources. The whole idea of an encyclopedia is to simplify the reader's life so he doesn't need to look up all the original material.
- We do not deal in the "ironic", "coincidental", etc.
- You're clearly well read on this topic, and I'm sure this page could benefit from your work, but the addition you've presented is unacceptable. Please look through our core content policies and manual of style, and continue editing it in your sandbox. Mysterious Whisper 12:48, 19 June 2013 (UTC)
- With all its 70kB the existing article doesn't make the most important point: four expert scholars agree that humankind is in grave jeopardy on a time scale of 100 years. Can't we just put the word out? I've seen many WP articles with a note up front listing deficiencies and asking readers to correct them. Can't we do that now? Why should I be stuck with all the work?
- See WP:SOAP: "You might wish to start a blog or visit a forum if you want to convince people of the merits of your opinions." For example, googling for "existential risk" finds forums such as lifeboat.com or lesswrong.org, and personal opinions about it would be on-topic at any catch-all venues such as TED, Reddit, etc. And, if you publish your opinions in the peer-reviewed literature, then they would become more Notable and, under appropriate circumstances, be suitable for inclusion. But Wikipedia isn't directly capable of peer review, and so my and your personal opinions don't carry weight here. --RHN
- As far as "four expert scholars agree", to avoid risk of WP:Synthesis they could all be quoted individually in a single paragraph I guess. Getting four expert scholars out of the millions of expert scholars who exist to agree on something isn't terribly impressive though, unless they're exceptionally prominent ones like Hawking, Rees, or Posner. -RHN
signed by TeddyLiu