
Émile P. Torres
Academic background
Thesis: "Human Extinction: A History of the Science and Ethics of Annihilation"
Academic work
Institutions: Case Western Reserve University
Main interests: Eschatology, existential risk, and human extinction

Émile P. Torres (formerly known as Phil Torres) is an American philosopher, intellectual historian, author, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. Along with computer scientist Timnit Gebru, Torres coined the acronym "TESCREAL" to criticize what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

Early life and education

Torres grew up in Maryland. They were raised in a fundamentalist evangelical Christian family, but later left the religion and became an atheist.[1][2] They attribute their interest in eschatology to their fundamentalist upbringing, which exposed them to substantial discussion of the Rapture.[3]

Torres attended the University of Maryland, College Park, earning a Bachelor of Science with honors in philosophy in 2007. In 2009, they earned a Master of Science in neuroscience from Brandeis University; concurrently, from 2008 to 2009, they were a special student in the philosophy department at Harvard University.[3] In 2020, Torres began a philosophy Ph.D. program at Leibniz University Hannover.[4]

Career

Much of Torres's work focuses on existential risk, the study of potential catastrophic events that could result in human extinction. They have also described a focus of their work as "existential ethics", which they define as "questions about whether our extinction would be right or wrong to bring about if it happened".[5] They also study the history of human ideas, and have researched the histories of some contemporary philosophical movements.[6][7]

In 2016, Torres published a book titled The End: What Science and Religion Tell Us About the Apocalypse, which discusses both religious and secular eschatology, and describes threats from technologies such as nuclear weapons, biological engineering, nanotechnology, and artificial intelligence.[3] In 2017 they published another book, titled Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Like their first book, it discusses a range of existential threats, but also delves into what they term "agential risk": the roles of outside agents in existential risk. Morality, Foresight, and Human Flourishing was positively reviewed in Futures as a "current and timely" introduction to existential risk.[8]

The Guardian reported in 2023 that there were "accounts of Torres harassing the philosopher Peter Boghossian and the British cultural theorist Helen Pluckrose." In the same article, Torres disputed these accounts as part of a coordinated campaign to undermine their critiques of "radical far-right views".[1]

In 2023, Torres became a postdoctoral researcher at Case Western Reserve University's Inamori International Center for Ethics and Excellence.[9] Also in 2023, Routledge published Torres's Human Extinction: A History of the Science and Ethics of Annihilation.[1] The book posits that the rise of Christianity, with its focus on salvation, removed the topic of human extinction from public discourse.[1] They argue that concerns around human extinction have re-emerged amid increasing secularism.[1][10] While Torres does not in practice "wish to see or promote" human extinction, they contend that it would not be inherently bad if it occurred without violence, such as through declining birthrates.[11]

Torres has published articles in popular media including The Washington Post and Current Affairs,[12][13] and is a contributing writer to Salon and Truthdig.[14][15]

Torres runs a reading group devoted to "The Ethics of Human Extinction".[16]

Transhumanism, longtermism, and effective altruism

Early in their career, Torres identified as a transhumanist, longtermist, and effective altruist.[1][17] Before 2017, Torres contributed writing to the Future of Life Institute, a non-profit organization focused on technology and existential risk. After they turned against the organization and opposed techno-optimism, advocating ideas such as a moratorium on the development of artificial intelligence, Torres says they were ousted and their writing was removed from the website.[18]

Torres later left the longtermist, transhumanist, and effective altruist communities, and became a vocal critic in 2019.[4][19] Torres claims that longtermism and related ideologies stem from eugenics, and could be used to justify dangerous consequentialist thinking.[19] Along with Timnit Gebru, Torres coined the acronym "TESCREAL" to refer to what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.[6] They first publicized the term in a paper on artificial general intelligence (AGI), in which Torres argued that a race towards developing AGI would produce systems that harm marginalized groups and concentrate power.[4]

Torres continued to write extensively about the philosophies, and about how they intersect with respect to artificial intelligence.[20] They have criticized adherents of those philosophies for treating AGI as a technological solution to issues like climate change and access to education, while ignoring other political, social, or economic factors.[21] They have also expressed concern over their belief that longtermism is prominent in the tech industry.[22] Torres has also been described as a critic of techno-optimism.[23] Ozy Brennan, writing in Asterisk magazine, criticized Torres's approach of grouping different philosophies as if they were a "monolithic" movement. They argue Torres has misunderstood these different philosophies, and has taken philosophical thought experiments out of context.[24] James Hughes and Eli Sennesh have claimed that Torres's work has a "conspiracy style of argumentation" and is "a bad intellectual history and bad politics."[25]

Torres has also written about artificial intelligence, and has advocated for more focus on AI harms including intellectual property theft, algorithmic bias, and concentration of wealth in technology corporations.[19] They have also argued that it is notable that climate change is not among the primary concerns of longtermists. According to Torres, although longtermists acknowledge that climate change could cause millions of deaths, they do not view it as an existential threat because they believe that those with access to significant resources will be able to survive.[26][27] Although effective altruism and a newer philosophy known as effective accelerationism have been described as opposing sides of the argument on how to approach developing artificial intelligence, Torres has opined that the two groups are in fact very similar, and characterized the conflict as a "family dispute". "What's missing is all of the questions that AI ethicists are asking about algorithmic bias, discrimination, the environmental impact of [AI systems], and so on," Torres told The Independent.[28]

Andrew Anthony, writing in The Observer, has described Torres as longtermism's "most vehement critic".[1]

Personal life

Torres is non-binary and uses they/them pronouns.[1]

Selected publications

Books

  • The End: What Science and Religion Tell Us about the Apocalypse. Pitchstone Publishing. February 16, 2016. ISBN 978-1634310406.
  • Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Pitchstone Publishing. October 2017. ISBN 978-1634311427.
  • Human Extinction: A History of the Science and Ethics of Annihilation. Routledge. July 14, 2023. ISBN 978-1032159065.

Papers

References

  1. ^ a b c d e f g h Anthony, Andrew (July 22, 2023). "'What if everybody decided not to have children?' The philosopher questioning humanity's future". The Observer. ISSN 0029-7712. Archived from the original on January 19, 2024. Retrieved April 3, 2024.
  2. ^ Hamburger, Jacob (January 14, 2019). "What Was New Atheism?". The Point. Archived from the original on November 13, 2023. Retrieved April 3, 2024.
  3. ^ a b c Howe, Brian (March 9, 2016). "Apocalypse How? Carrboro's Phil Torres on Nanobots, Biotech, A.I., and Other Onrushing Threats to Our Species". Indy Week. Archived from the original on December 11, 2023. Retrieved April 3, 2024.
  4. ^ a b c Ahuja, Anjana (May 10, 2023). "We need to examine the beliefs of today's tech luminaries". Financial Times. Archived from the original on January 12, 2024. Retrieved April 3, 2024.
  5. ^ Caddy, Becca (April 21, 2023). "Is There a Best Way to Think About the Future of Earth?". Inverse. Archived from the original on July 21, 2023. Retrieved April 3, 2024.
  6. ^ a b Torres, Émile P. (May 7, 2023). "Why Effective Altruism and 'Longtermism' Are Toxic Ideologies". Current Affairs (Interview). Interviewed by Nathan J. Robinson. Archived from the original on March 1, 2024. Retrieved April 3, 2024.
  7. ^ Dodds, Io (April 19, 2023). "Meet the 'elite' couples breeding to save mankind". The Telegraph. Archived from the original on March 30, 2024. Retrieved April 3, 2024.
  8. ^ Umbrello, Steven (April 2018). "Book review: Phil Torres's Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks". Futures. 98: 90–91. doi:10.1016/j.futures.2018.02.007. hdl:2318/1685522. Archived from the original on 2023-12-03. Retrieved 2024-04-04 – via Elsevier Science Direct.
  9. ^ Smith, Brianna (January 23, 2024). "Inamori International Center for Ethics and Excellence's Émile Torres weighs in on superhuman AI panic". The Daily. Case Western Reserve University. Archived from the original on April 4, 2024. Retrieved April 3, 2024.
  10. ^ Wright, Mic (March 31, 2024). "Apocalypse, how?". Perspective Media. Archived from the original on April 4, 2024. Retrieved April 3, 2024.
  11. ^ Anthony, Andrew (2023-07-22). "'What if everybody decided not to have children?' The philosopher questioning humanity's future". The Observer. ISSN 0029-7712. Retrieved 2024-07-20.
  12. ^ Torres, Émile P. (August 31, 2022). "How AI could accidentally extinguish mankind". The Washington Post. Archived from the original on May 8, 2023. Retrieved April 3, 2024.
  13. ^ "Émile P. Torres". Current Affairs. Archived from the original on December 2, 2023. Retrieved April 3, 2024.
  14. ^ "Émile P. Torres's Articles at Salon.com". Salon. Archived from the original on April 1, 2024. Retrieved April 3, 2024.
  15. ^ "Émile P. Torres". Truthdig. Archived from the original on November 19, 2023. Retrieved April 3, 2024.
  16. ^ "Extinction Reading Group". My Site 6. Retrieved 2024-07-29.
  17. ^ Kinstler, Linda (November 15, 2022). "The good delusion: has effective altruism broken bad?". The Economist. ISSN 0013-0613. Archived from the original on November 24, 2023. Retrieved April 3, 2024.
  18. ^ Volpicelli, Gian (November 24, 2022). "Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI". Politico. Archived from the original on March 26, 2024. Retrieved April 3, 2024.
  19. ^ a b c "The fight over a 'dangerous' ideology shaping AI debate". Agence France-Presse. August 27, 2023. ISSN 0013-0389. Archived from the original on August 27, 2023. Retrieved April 3, 2024.
  20. ^ Davies, Paul J. (December 30, 2023). "Apocalypse Now? Only In Our Fevered Dreams". Bloomberg. Archived from the original on December 30, 2023. Retrieved April 3, 2024.
  21. ^ Piquard, Alexandre (June 20, 2023). "Behind AI, the return of technological utopias". Le Monde. Archived from the original on January 12, 2024. Retrieved April 3, 2024.
  22. ^ Torres, Émile P. (April 30, 2023). "'An odd and peculiar ideology'" (Interview). Interviewed by Esther Menhard. Netzpolitik.org. Archived from the original on March 4, 2024. Retrieved April 4, 2024. netzpolitik.org: You have spoken out against longtermism. What is wrong with it in your view? Torres: First I would have to underline the extent to which this view is influential in the world. Elon Musk calls it a "close match for my philosophy". It's really pervasive in the tech industry.
  23. ^ Ongweso, Edward (July 13, 2023). "Silicon Valley's Quest to Build God and Control Humanity". The Nation. ISSN 0027-8378. Archived from the original on April 4, 2024. Retrieved April 3, 2024.
  24. ^ Brennan, Ozy. "The "TESCREAL" Bungle". Asterisk. Retrieved 2024-06-18.
  25. ^ Hughes, James. "Conspiracy Theories, Left Futurism, and the Attack on TESCREAL".
  26. ^ Ackermann, Rebecca (November 2022). "Effective altruism and its growing influence". MIT Technology Review. 125 (6): 42–51. ProQuest 2731823629.
  27. ^ Davidson, Joe P.L. (2023). "Extinctiopolitics: Existential Risk Studies, The Extinctiopolitical Unconscious, And The Billionaires' Exodus from Earth". New Formations. 107 (107): 48–65. doi:10.3898/newf:107-8.03.2022. ISSN 0950-2378. Archived from the original on 2024-04-04. Retrieved 2024-04-04.
  28. ^ Dodds, Io (February 7, 2024). "Inside the political split between AI designers that could decide our future". The Independent. Archived from the original on March 5, 2024. Retrieved April 4, 2024.