Why The Future Doesn't Need Us


"Why The Future Doesn't Need Us" is an article written by Bill Joy (then chief scientist at Sun Microsystems) in the April 2000 issue of Wired magazine. In it, he argues (quoting the subtitle) that "Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species."

While some critics have characterized Joy's stance as obscurantism or neo-Luddism, others share his concerns about the consequences of rapidly expanding technology.[1]

Summary
Joy argues that emerging technologies pose a far greater danger to humanity than any technology of the past. In particular, he focuses on genetic engineering, nanotechnology, and robotics. He argues that 20th-century technologies of destruction, such as the nuclear bomb, were limited to large governments because of the complexity and cost of such devices, as well as the difficulty of acquiring the required materials. He cites the novel The White Plague as a potential nightmare scenario, in which a mad scientist creates a virus capable of wiping out humanity.

Joy also voices concern about increasing computer power. He worries that computers will eventually become more intelligent than humans, leading to dystopian scenarios such as robot rebellion. He notably quotes Ted Kaczynski (the Unabomber) on this topic.

Criticism
In The Singularity Is Near, Ray Kurzweil questioned the regulation of potentially dangerous technology, asking, "Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes?" John Zerzan and Chellis Glendinning, however, believe that modern technologies harm both freedom and health, and that the two issues are connected.[2][3][4]

In the AAAS Science and Technology Policy Yearbook 2001 article "A Response to Bill Joy and the Doom-and-Gloom Technofuturists", Joy was criticized for technological tunnel vision, failing to consider social factors in his predictions.[5]

John McGinnis argues that Joy's proposal for "relinquishment" of technologies that might lead to artificial general intelligence (AGI) would fail because "prohibitions, at least under current technology and current geopolitics, are certain to be ineffective". Verification of AGI-limitation agreements would be difficult due to AGI's dual-use nature and ease of being hidden. Similarly, Joy's "Hippocratic oath" proposal of voluntary abstention by scientists from harmful research would not be effective either, because scientists might be pressured by governments, tempted by profits, uncertain which technologies would lead to harm down the road, or opposed to Joy's premise in the first place. Rather than relinquishment of AGI, McGinnis argues for a kind of differential technological development in which friendly artificial intelligence is advanced faster than other kinds.[6]

Extropian futurist Max More shares Kurzweil's view that "technological relinquishment" is impractical and ineffective, but adds a larger moral and philosophical component to the argument, contending that the perfection and evolution of humanity is not "losing our humanity" and that a voluntarily sought increase in capacity in any domain does not represent "a loss" of any kind.[7]

Aftermath
After the publication of the article, Bill Joy suggested assessing technologies to gauge their implicit dangers, as well as having scientists refuse to work on technologies that have the potential to cause harm.

In the 15th-anniversary issue of Wired in 2008, Lucas Graves reported that genetic engineering, nanotechnology, and robotics had not reached the level that would make Joy's scenario come true.[8]

References
  1. ^ Khushf, George (2004). "The Ethics of Nanotechnology: Vision and Values for a New Generation of Science and Engineering", Emerging Technologies and Ethical Issues in Engineering, National Academy of Engineering, pp. 31–32. Washington, DC: The National Academies Press. ISBN 030909271X
  2. ^ Zerzan, John (31 October 2002). "What Ails Us?". Green Anarchy. Federated Anarchy Inc (10). Retrieved 5 March 2012.
  3. ^ "Age of Grief". Primitivism.com. Retrieved 2009-07-08.
  4. ^ Cohen, Mark Nathan (1991). "Health and the Rise of Civilization". Primitivism.com (excerpt); Yale University Press. Retrieved 2009-07-08.
  5. ^ Brown, John Seely; Duguid, Paul (2001). "A Response to Bill Joy and the Doom-and-Gloom Technofuturists" (PDF). Science and Technology Policy Yearbook. American Association for the Advancement of Science.
  6. ^ McGinnis, John O. (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Retrieved 16 July 2014.
  7. ^ More, Max (May 7, 2000). "Embrace, Don't Relinquish the Future". Extropy. Retrieved 22 July 2018.
  8. ^ Graves, Lucas (24 March 2008). "15th Anniversary: Why the Future Still Needs Us a While Longer". Wired. Wired.com. Retrieved 2009-10-22.

Further reading

  • Messerly, John G. "I'm glad the future doesn't need us: a critique of Joy's pessimistic futurism". ACM SIGCAS Computers and Society, Volume 33, Issue 2 (June 2003). ISSN 0095-2737.
