Population ethics

Population ethics is the philosophical study of the ethical problems arising when our actions affect who is born and how many people are born in the future. An important area within population ethics is population axiology, which is "the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live."[1]

Moral philosopher Derek Parfit brought population ethics to the attention of the academic community as a modern branch of moral philosophy in his seminal work Reasons and Persons in 1984.[2] Discussions of population ethics are thus a relatively recent development in the history of philosophy. Formulating a satisfactory theory of population ethics is regarded as "notoriously difficult".[3] While scholars have proposed and debated many different population ethical theories, no consensus in the academic community has emerged.

Gustaf Arrhenius, Professor of Philosophy and Director of the Institute for Futures Studies, comments on the history and challenges within population ethics that

For the last thirty years or so, there has been a search underway for a theory that can accommodate our intuitions in regard to moral duties to future generations. The object of this search has proved surprisingly elusive. ... The main problem has been to find an adequate population theory, that is, a theory about the moral value of states of affairs where the number of people, the quality of their lives, and their identities may vary. Since, arguably, any reasonable moral theory has to take these aspects of possible states of affairs into account when determining the normative status of actions, the study of population theory is of general import for moral theory.[4]

Positions

All major theories in population ethics tend to produce counterintuitive results.[4] Hilary Greaves, Oxford Professor of Philosophy and director of the Global Priorities Institute, explains that this is no coincidence, as academics have proved a series of impossibility theorems for the field in recent decades. These impossibility theorems are formal results showing that "for various lists of prima facie intuitively compelling desiderata, ... no axiology can simultaneously satisfy all the desiderata on the list."[1] She concludes that choosing a theory in population ethics comes down to choosing which moral intuition one is least unwilling to give up.

Totalism

"The point up to which, on Utilitarian principles, population ought to be encouraged to increase, is not that at which average happiness is the greatest possible...but that at which the product formed by multiplying the number of persons living into the amount of average happiness reaches its maximum." ~ Henry Sidgwick[5]

Total utilitarianism, or totalism, aims to maximize the total sum of wellbeing in the world, as constituted by the number of individuals multiplied by their average quality of life. Consequently, totalists hold that a state of affairs can be improved either by increasing the average wellbeing level of the existing population or by increasing the population size through the addition of individuals with positive wellbeing. Greaves formally defines totalism as follows: A state of affairs "A is better than B if total well-being in A is higher than total well-being in B. A and B are equally good iff total well-being in A is equal to total well-being in B."[1]
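
Greaves's definition can be restated as a simple comparison of sums. The following sketch is not drawn from the cited sources; the populations and wellbeing numbers are invented purely for illustration of how a totalist comparison works:

```python
def total_wellbeing(population):
    """Total wellbeing of a state of affairs: the sum of individual wellbeing
    levels, which equals population size times average wellbeing."""
    return sum(population)


def totalist_better(a, b):
    """Under totalism, A is better than B iff total wellbeing in A is higher."""
    return total_wellbeing(a) > total_wellbeing(b)


# Hypothetical states of affairs, represented as lists of individual wellbeing levels.
small_happy = [80, 80, 80]      # a few people at a high wellbeing level
large_modest = [10] * 30        # many more people at a modest positive level

print(total_wellbeing(small_happy))                 # 240
print(total_wellbeing(large_modest))                # 300
print(totalist_better(large_modest, small_happy))   # True
```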

The mathematics of totalism leads to an implication that many people find counterintuitive. In his Reasons and Persons, Derek Parfit was among the first to spell out and popularize this implication in the academic literature, coining the term "repugnant conclusion" for it.

The repugnant conclusion

In Parfit's original formulation, the repugnant conclusion states that

[f]or any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

— Derek Parfit, Reasons and Persons (1984), p. 342

Parfit arrives at this conclusion by showing that there is a series of steps, each of which intuitively makes the overall state of the world better, that leads from an "A" world—one with a large population with high average wellbeing—to a "Z" world—one with an extremely large population but just barely positive average wellbeing. Totalism leads to the repugnant conclusion because it holds that the Z world is better than the A world, as the total wellbeing is higher in the Z world for a sufficiently large population.[6]
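
The arithmetic behind this ranking can be made explicit. In the rough sketch below, the population sizes and wellbeing levels are invented for illustration and are not taken from Parfit; totalism ranks the "Z" world above the "A" world as soon as the "Z" population is sufficiently large.

```python
# Illustrative numbers only; Parfit's argument does not depend on these values.
a_population, a_average = 10_000_000_000, 90.0       # "A" world: very high quality of life
z_population, z_average = 10_000_000_000_000, 0.1    # "Z" world: lives barely worth living

a_total = a_population * a_average   # 9.0e11
z_total = z_population * z_average   # 1.0e12

# Totalism ranks Z above A once Z's population is large enough for its
# barely positive average to outweigh A's total: here 1.0e12 > 9.0e11.
print(z_total > a_total)   # True
```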

Greaves writes that Parfit searched for a way to avoid the repugnant conclusion, but that he

failed to find any alternative axiology that he himself considered satisfactory, but [Parfit] held out hope that this was merely for want of searching hard enough: That, in the future, some fully satisfactory population axiology, called "Theory X" by way of placeholder, might be found. Much of the subsequent literature has consisted of attempts to formulate such a "Theory X."

— Hilary Greaves, Population Axiology (2017), Philosophy Compass, p. 12

The impossibility theorems in population ethics highlight the difficulty of avoiding the repugnant conclusion without giving up even more fundamental axioms in ethics and rationality. In light of this, several prominent academics have come to accept and even defend the repugnant conclusion, including the philosophers Torbjörn Tännsjö[7] and Michael Huemer,[8] because accepting it avoids all of the impossibility theorems.[1]

Averagism

Average utilitarianism, or averagism, aims only to improve the average wellbeing level, without regard for the number of individuals in existence. Averagism avoids the repugnant conclusion, because it holds that, in contrast to totalism, reductions in the average wellbeing level can never be compensated for by adding more people to the population.[6] Greaves defines averagism formally as follows: A state of affairs "A is better than B iff average well-being in A is higher than average well-being in B. A and B are equally good iff average well-being in A is equal to average well-being in B."[1]

Averagism has never been widely embraced by philosophers, because it leads to counterintuitive implications said to be "at least as serious"[1] as the repugnant conclusion. In particular, Parfit shows that averagism leads to the conclusion that a population of just one person is better than any large population—say, the 7.7 billion people alive today—as long as the average wellbeing level of the single person is slightly higher than that of the large group of people.[2]

More counterintuitively still, averagism also implies that "for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being".[6]
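
Both implications follow directly from comparing averages rather than totals. A minimal sketch, with invented numbers and a small stand-in for today's population, illustrates the two cases:

```python
def average_wellbeing(population):
    """Average wellbeing of a (non-empty) state of affairs."""
    return sum(population) / len(population)


def averagist_better(a, b):
    """Under averagism, A is better than B iff average wellbeing in A is higher."""
    return average_wellbeing(a) > average_wellbeing(b)


# One person slightly better off than the members of a large, uniformly well-off population.
# (A small stand-in list: with a uniform level, the average is the same at any size.)
single_person = [91]
large_population = [90] * 1_000_000
print(averagist_better(single_person, large_population))   # True

# One life of constant torture versus millions of lives at a slightly less negative level.
one_tortured = [-100]
many_miserable = [-99] * 1_000_000
print(averagist_better(many_miserable, one_tortured))       # True
```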

Sadistic conclusion

Along these lines, averagism entails a further counterintuitive implication, which Arrhenius called the "sadistic conclusion". He defines it as follows: "An addition of lives with negative welfare can be better than an addition of lives with positive welfare."[9] This follows from averagism because adding a small number of tortured people with horrible lives to a population can diminish the average wellbeing level less than would adding a sufficiently large number of people with positive lives, as long as their wellbeing is below average.
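
The arithmetic can be illustrated with a small sketch (the base population and welfare levels are invented for the example): starting from a well-off population, adding a few lives of negative welfare can lower the average less than adding very many lives of positive but below-average welfare would.

```python
def average_wellbeing(population):
    return sum(population) / len(population)


base = [100] * 1_000                    # an existing, well-off population

with_tortured = base + [-50] * 5        # add a few lives of strongly negative welfare
with_modest = base + [20] * 2_000       # add many lives of positive but below-average welfare

print(round(average_wellbeing(with_tortured), 1))   # 99.3
print(round(average_wellbeing(with_modest), 1))     # 46.7
# Averagism therefore ranks adding the tortured lives as the better outcome.
print(average_wellbeing(with_tortured) > average_wellbeing(with_modest))   # True
```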

Person-affecting views

Some people have the intuition that, all else being equal, adding a happy person to the population does not constitute an improvement to the overall state of the world. This intuition is captured by the person-affecting class of views in population ethics, and is often expressed in Jan Narveson's words that "we are in favour of making people happy, but neutral about making happy people".[10]

Person-affecting views can be seen as a revision of total utilitarianism in which the "scope of the aggregation" is changed from all individuals who would exist to a subset of those individuals (though the details of this vary).[11] They avoid the repugnant conclusion because they deny that a loss of wellbeing in the present generation can be compensated for by bringing into existence additional people who would enjoy high wellbeing.

Person-affecting views can be characterized by the following two claims: first, the person-affecting restriction holds that doing something morally good or bad requires it to be good or bad for someone; and second, the incomparability of non-existence holds that existing and non-existing are incomparable, which implies that it cannot be good or bad for someone to come into existence.[11] Taken together, these claims entail what Greaves describes as the neutrality principle: "Adding an extra person to the world, if it is done in such a way as to leave the well-being levels of others unaffected, does not make a state of affairs either better or worse."[1]
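
One simple way to see how the neutrality principle can fall out of restricting the scope of aggregation is the sketch below. It implements only one crude reading, on which wellbeing is summed over the people who exist in both states of affairs being compared; actual person-affecting views differ on exactly how the comparison class is fixed, and the names and numbers here are hypothetical.

```python
def restricted_total(state, shared_people):
    """Sum wellbeing only over the people in the given comparison class."""
    return sum(level for person, level in state.items() if person in shared_people)


def person_affecting_better(a, b):
    """A crude comparison restricted to the people who exist in both A and B."""
    shared = a.keys() & b.keys()
    return restricted_total(a, shared) > restricted_total(b, shared)


# States of affairs map (hypothetical) people to wellbeing levels.
world_without = {"ann": 50, "bo": 50}
world_with_extra = {"ann": 50, "bo": 50, "carol": 80}   # same people plus one extra happy person

# The extra person leaves the shared people's wellbeing unchanged, so neither
# state comes out better than the other, mirroring the neutrality principle.
print(person_affecting_better(world_with_extra, world_without))   # False
print(person_affecting_better(world_without, world_with_extra))   # False
```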

However, person-affecting views generate many counterintuitive implications, leading Greaves to comment that "it turns out to be remarkably difficult to formulate any remotely acceptable axiology that captures this idea of neutrality".[1]

Asymmetric views towards suffering and happiness

One of the most challenging problems in population ethics, affecting person-affecting views in particular, is the asymmetry between bringing into existence happy lives and bringing into existence unhappy lives (lives not worth living).[12][13][14] Jeff McMahan describes the asymmetry by saying that

while the fact that a person's life would be worse than no life at all (or 'worth not living') constitutes a strong moral reason for not bringing him into existence, the fact that a person's life would be worth living provides no (or only a relatively weak) moral reason for bringing him into existence.[15]

One response to this challenge has been to reject the asymmetry and claim that just as we have reasons not to bring into existence a being who will have a bad life, we have reasons to bring into existence a being who will have a good life.[16] Critics of this view can claim either that our reasons not to bring unhappy lives into existence are stronger than our reasons to create happy lives, or that while we should avoid creating unhappy lives we have no reason to create happy lives. While the asymmetry has been defended from different viewpoints,[17][18][19] the latter claim is the one that would be favored especially by negative consequentialism and other suffering-focused views.[20][21]

Practical relevance

Population ethical problems are particularly likely to arise when making large-scale policy decisions, but they can also affect how we should evaluate certain choices made by individuals. Examples of practical questions that give rise to population ethical problems include whether or not to have an additional child; how to allocate life-saving resources between young and old people; how many resources to dedicate to climate change mitigation; and whether or not to support family planning programs in the developing world. The decisions made in all of these cases affect the number, the identities, and the average quality of life of future people.[1]

One's views regarding population ethics have the potential to significantly shape what one thinks of as the most pressing moral priorities.[22] For instance, the total view in population ethics and related theories have been claimed to imply longtermism, defined by the Global Priorities Institute at the University of Oxford as "the view that the primary determinant of the differences in value of the actions we take today is the effect of those actions on the very long-term future".[23] On this basis, Oxford philosopher Nick Bostrom argues that the prevention of existential risks to humanity is an important global priority in order to preserve the value of the many lives that could come to exist in the future.[22][24] Others who endorse the asymmetry between bringing into existence happy and miserable lives have also supported a longtermist approach, focusing on the prevention of scenarios of future suffering, especially those in which suffering would prevail over happiness or might exist in astronomical amounts.[25][26][27] Longtermist ideas have been taken up and put into practice by several organizations associated with the effective altruism community, such as the Open Philanthropy Project and 80,000 Hours, as well as by philanthropists like Dustin Moskovitz.[28][29][30]

References

  1. ^ a b c d e f g h i Greaves, Hilary (2017). "Population axiology". Philosophy Compass. 12 (11): e12442. doi:10.1111/phc3.12442.
  2. ^ a b Parfit, Derek (1984). Reasons and Persons. Oxford University Press. doi:10.1093/019824908X.001.0001. ISBN 9780198249085.
  3. ^ Thomas, Teruji (2017). "Some possibilities in population axiology". Mind. 127 (507): 807–832. doi:10.1093/mind/fzx047.
  4. ^ a b Gustaf Arrhenius (2000). Future generations: A challenge for moral theory (PhD). Uppsala University. Retrieved 2024-01-22.
  5. ^ Sidgwick, Henry (1907). The Methods of Ethics (7th ed.). United Kingdom. Book 4, chapter 1, section 2.
  6. ^ a b c Arrhenius, Gustaf; Ryberg, Jesper; Tännsjö, Torbjörn (2017), "The Repugnant Conclusion", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 ed.), Metaphysics Research Lab, Stanford University, retrieved 2019-06-18
  7. ^ Tännsjö, Torbjörn (2002). "Why we ought to accept the repugnant conclusion". Utilitas. 14 (3): 339–359. doi:10.1017/S0953820800003642. S2CID 233360601.
  8. ^ Huemer, Michael (2008). "In Defence of Repugnance" (PDF). Mind. 117 (468): 899–933. doi:10.1093/mind/fzn079.
  9. ^ Arrhenius, Gustaf (2000). "An Impossibility Theorem for Welfarist Axiology". Economics and Philosophy. 16 (2): 247–266. doi:10.1017/S0266267100000249. S2CID 17700344.
  10. ^ Narveson, Jan (1973). "Moral problems of population". The Monist. 57 (1): 62–86. doi:10.5840/monist197357134. PMID 11661014.
  11. ^ a b Beckstead, Nick (2013). On the overwhelming importance of shaping the far future (Thesis). New Brunswick, New Jersey: Rutgers University. doi:10.7282/T35M649T.
  12. ^ Parfit, Derek (1984). Reasons and Persons. Oxford: Oxford University Press. p. 391.
  13. ^ McMahan, Jeff (2009). "Asymmetries in the Morality of Causing People to Exist". In Melinda A. Roberts and David T. Wasserman, eds., Harming Future Persons. Netherlands: Springer. pp. 49–68.
  14. ^ Frick, Johann David (2014). "Making People Happy, Not Making Happy People": A Defense of the Asymmetry Intuition in Population Ethics (PhD). Harvard University.
  15. ^ McMahan, Jeff (1981). "Problems of Population Theory". Ethics. 92 (1): 96–127.
  16. ^ Holtug, Nils (2004). "Person-affecting Moralities". In Jesper Ryberg and Torbjörn Tännsjö, eds., The Repugnant Conclusion. Dordrecht: Kluwer. pp. 129–61.
  17. ^ Narveson, Jan (1978). "Future People and Us". In R. I. Sikora and Brian Barry, eds., Obligations to Future Generations. Philadelphia: Temple University Press. pp. 38–60.
  18. ^ Algander, Per (2012). "A Defence of the Asymmetry in Population Ethics". Res Publica. 18 (2): 145–57.
  19. ^ Grill, Kalle (2017). "Asymmetric Population Axiology: Deliberative Neutrality Delivered". Philosophical Studies. 174 (1): 219–236.
  20. ^ Gloor, L. (2016). "The case for suffering-focused ethics". Foundational Research Institute.
  21. ^ Knutsson, S. (2019). "The world destruction argument". Inquiry: 1–20.
  22. ^ a b MacAskill, William; Chappell, Richard Yetter (2021). "Population Ethics". Introduction to Utilitarianism. Retrieved 2021-07-23.
  23. ^ MacAskill, William; Greaves, Hilary; O’Keeffe-O’Donovan, Rossa; Trammell, Philip (2019). A Research Agenda for the Global Priorities Institute. Oxford: Global Priorities Institute, University of Oxford. p. 6.
  24. ^ Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002.
  25. ^ Daniel, Max (2017). "S-risks: Why they are the worst existential risks, and how to prevent them". Foundational Research Institute.
  26. ^ Baumann, Tobias (2017). "S-risks: An introduction". Reducing Risks of Future Suffering.
  27. ^ Torres, Phil (2018). "Space colonization and suffering risks: Reassessing the 'maxipok rule'". Futures. 100: 74–85.
  28. ^ Todd, Benjamin (2017-10-24). "Presenting the long-term value thesis". 80,000 Hours. Retrieved 2019-06-17.
  29. ^ Karnofsky, Holden (2014-07-03). "The Moral Value of the Far Future". Open Philanthropy Project. Retrieved 2019-06-17.
  30. ^ "Ben Delo". Giving Pledge. 2019-04-15. Retrieved 2019-06-17.
