
Summary

The book makes the case for longtermism—an ethical stance that gives priority to improving the long-term future—and proposes that we can make the future better in two ways: "by averting permanent catastrophes, thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts ... Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality".[1]: 35–36  The book has five parts.

Part 1: Longtermism

Part 1 introduces and advocates for longtermism, which MacAskill defines as "the idea that positively influencing the long-term future is a key moral priority of our time."[1]: 4  This part of the book also describes how we, the current generation, can shape the future through our actions.[1]: 29 

MacAskill's argument for longtermism has three parts. First, future people count morally as much as the people alive today, which he supports by drawing the analogy that "distance in time is like distance in space. People matter even if they live thousands of miles away. Likewise, they matter even if they live thousands of years hence".[1]: 10 

Second, the future is immensely big, since humanity may survive for a very long time and future people could therefore vastly outnumber those alive today. MacAskill points out that the "future of civilisation could ... be extremely long. The earth will remain habitable for hundreds of millions of years ... And if humanity ultimately takes to the stars, the timescales become literally astronomical".[1]: 14 

Third, the future could be very good or very bad, and our actions may affect what it will be. It could be very good if technological and moral progress continue to improve the quality of life into the future, just as they have greatly improved our lives compared to our ancestors; yet, the future could also be very bad if technology were to allow a totalitarian regime to control the world or a world war to completely destroy civilisation.[1]: 19–21  MacAskill notes that our present time is highly unusual in that "we live in an era that involves an extraordinary amount of change"[1]: 26 —both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits).[1]: 26–28  From this he concludes that we live at a pivotal moment in human history, where "the world’s long-run fate depends in part on the choices we make in our lifetimes"[1]: 6  since "society has not yet settled down into a stable state, and we are able to influence which stable state we end up in".[1]: 28 

Part 1 ends with a chapter on how individuals can shape the course of history. MacAskill introduces a three-part framework for thinking about the future, which states that the long-term value of an outcome we may bring about depends on its significance, persistence, and contingency.[1]: 31–33  He explains that significance "is the average value added by bringing about a certain state of affairs", persistence means "how long that state of affairs lasts, once it has been brought about", and contingency "refers to the extent to which the state of affairs depends on an individual’s action".[1]: 32 

Part 2: Trajectory changes

Part 2 investigates how moral change and value lock-in may constitute trajectory changes, affecting the long-run value of future civilisation. MacAskill argues that "we are living through a period of plasticity, that the moral views that shape society are like molten glass that can be blown into many different shapes. But the glass is cooling, and at some point, perhaps in the not-too-distant future, it might set".[1]: 102 

MacAskill suggests that moral and cultural values are malleable, contingent, and potentially long-lived—if history were to be rerun, the dominant global values may be very different from those in our world. For example, he argues that the abolition of slavery may not have been morally or economically inevitable.[1]: 70  Abolition may thus have been a turning point in the entirety of human history, supporting the idea that improving society's values may positively influence the long-run future.

MacAskill warns of a potential value lock-in, "an event that causes a single value system, or set of value systems, to persist for an extremely long time".[1]: 78  He notes that if "value lock-in occurred globally, then how well or poorly the future goes would be determined in significant part by the nature of those locked-in values".[1]: 78  Various past rulers sought to lock in their values—some with more success, like the Han dynasty in ancient China entrenching Confucianism for over a millennium,[1]: 78  and some with less success, like Hitler's proclaimed "Thousand-Year Reich".[1]: 92  MacAskill states that the "key issue is which values will guide the future. Those values could be narrow-minded, parochial, and unreflective. Or they could be open-minded, ecumenical, and morally exploratory".[1]: 88 

Value lock-in could result from certain technological advances, according to MacAskill. In particular, he argues that the development of artificial general intelligence (AGI)—an AI system "capable of learning as wide an array of tasks as human beings can and performing them to at least the same level as human beings"[1]: 80 —could result in the permanent lock-in of the values of those who control or have programmed the AGI.[1]: 80–86  This may occur because AGI systems may be both enormously powerful and potentially immortal, since they "could replicate themselves as many times as they wanted, just as easily as we can replicate software today".[1]: 86  MacAskill concludes that "if this happened, then the ruling ideology could in principle persist as long as civilisation does. And there would no longer be competing value systems that could dislodge the status quo".[1]: 86 

Part 3: Safeguarding civilisation

Part 3 explores how to protect humanity from risks of extinction, unrecoverable civilisational collapse, and long-run technological stagnation.

MacAskill discusses several risks of human extinction, focusing on engineered pathogens, misaligned artificial general intelligence (AGI), and great power war. He points to the rapid progress in biotechnology and states that "engineered pathogens could be much more destructive than natural pathogens because they can be modified to have dangerous new properties", such as a pathogen "with the lethality of Ebola and the contagiousness of measles".[1]: 108  MacAskill points to other scholars who "put the probability of an extinction-level engineered pandemic this century at around 1 percent" and references his colleague Toby Ord, who estimates the probability at 3 percent in his 2020 book The Precipice: Existential Risk and the Future of Humanity.[1]: 113  Ensuring humanity's survival by reducing extinction risks may significantly improve the long-term future by increasing the number of flourishing future lives.[1]: 35–36 

The next chapter discusses the risk of civilisational collapse, referring to events "in which society loses the ability to create most industrial and postindustrial technology".[1]: 124  He discusses several potential causes of civilisational collapse—including extreme climate change, fossil fuel depletion, and nuclear winter caused by nuclear war—concluding that civilisation appears very resilient, with recovery after a collapse being likely.[1]: 127–142  Yet, he believes that the "lingering uncertainty is more than enough to make the risk of unrecovered collapse a key longtermist priority".[1]: 142 

MacAskill next considers the risk of long-lasting technological and economic stagnation. While he considers indefinite stagnation unlikely, "it seems entirely plausible that we could stagnate for hundreds or thousands of years".[1]: 144  This matters for longtermism for two reasons. First, "if society stagnates technologically, it could remain stuck in a period of high catastrophic risk for such a long time that extinction or collapse would be all but inevitable".[1]: 142  Second, the society emerging after the period of stagnation may be guided by worse values than society today.[1]: 144 

Part 4: Assessing the end of the world

Part 4 discusses how bad the end of humanity would be, which depends on whether it is morally good for happy people to be born and whether the future will be good or bad. The answers to these questions, according to MacAskill, "determine whether we should focus on trajectory changes or on ensuring survival, or on both".[1]: 163 

Whether making happy people improves the world is a key question in population ethics, which concerns "the evaluation of actions that might change who is born, how many people are born, and what their quality of life will be".[1]: 168  Answering this question determines whether we should "care about the loss of those future people who will never be born if humanity goes extinct in the next few centuries".[1]: 188  After discussing several theories in population ethics—including the total view, the average view, critical-level theories, and person-affecting views—MacAskill concludes that "it is a loss if future people are prevented from coming into existence—as long as their lives would be good enough. So the early extinction of the human race would be a truly enormous tragedy".[1]: 189 

On whether the future will be good or bad, MacAskill notes that the "more optimistic we are, the more important it is to avoid permanent collapse or extinction; the less optimistic we are, the stronger the case for focusing instead on improving values or other trajectory changes".[1]: 192  To answer the question, MacAskill compares how the quality of life of humans and nonhuman animals has changed over time and how both groups should be weighted numerically.[1]: 194–213  While arguing that the billions of animals suffering in factory farms likely have negative well-being—they would have been better off never having been born—MacAskill concludes optimistically that "we should expect the future to be positive on balance".[1]: 193  He justifies this optimism in several ways, most crucially by pointing to "an asymmetry in the motivation of future people—namely, people sometimes produce good things just because the things are good, but people rarely produce bad things just because they are bad".[1]: 218 

Part 5: Taking action

Part 5 details what readers can do to take action based on the book's arguments.

MacAskill emphasises the significance of professional work, writing that "by far the most important decision you will make, in terms of your lifetime impact, is your choice of career".[1]: 234  He points the reader to the nonprofit 80,000 Hours, which he co-founded and which conducts research and provides advice on which careers have the largest positive social impact, especially from a longtermist perspective.[2][3] One career opportunity he highlights is movement-building work—to "convince others to care about future generations ... and to act to positively influence the long term".[1]: 243 

He makes a case that the common emphasis on personal behaviour and consumption, "though understandable, is a major strategic blunder for those of us who want to make the world better".[1]: 243  Instead, he argues that donations to effective causes and organisations are much more impactful than changing our personal consumption.[1]: 232  Beyond donations, he elaborates on three other impactful personal decisions: political activism, spreading good ideas, and having children.[1]: 233 

MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm.[1]: 226, 240 

References

  1. MacAskill, William (2022). What We Owe the Future. New York: Basic Books. ISBN 978-1-5416-1862-6.
  2. "80,000 Hours - About us: what do we do, and how can we help?". 80,000 Hours. Retrieved 2022-08-14.
  3. Todd, Benjamin (2017). "Longtermism: the moral significance of future generations". 80,000 Hours. Retrieved 2022-08-14.