
Trolley problem

From Wikipedia, the free encyclopedia
The trolley problem: should you pull the lever to divert the runaway trolley onto the side track?

The trolley problem is a thought experiment in ethics (that is, moral philosophy). The general form of the problem is this:

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the main track. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?

Philippa Foot introduced this modern form of the problem in 1967.[1] Judith Thomson,[2][3] Frances Kamm,[4] and Peter Unger have also analysed the dilemma extensively.[5]

Earlier forms of the problem predated Foot's publication. Frank Chapman Sharp included a version in a moral questionnaire given to undergraduates at the University of Wisconsin in 1905. In this variation, the railway's switchman controlled the switch, and the lone individual to be sacrificed (or not) was the switchman's child.[6][7] The German legal scholar Hans Welzel discussed a similar problem in 1951.[8] In his commentary on the Talmud, published long before his death in 1953, Avrohom Yeshaya Karelitz discussed the related question of whether it is ethical to deflect a projectile from a larger crowd toward a smaller one.[9]

Beginning in 2001, the trolley problem and its variants have been used extensively in empirical research on moral psychology. Trolley problems have also been a topic of popular books.[10] The problem arises in discussing the ethics of autonomous vehicle design, which may require programming to choose who or what to strike when a collision appears to be unavoidable.

Original dilemma

Foot's original structure of the problem ran as follows:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man's life for the lives of five.[1]

A utilitarian view asserts that it is obligatory to steer to the track with one man on it. According to classical utilitarianism, such a decision would be not only permissible, but, morally speaking, the better option (the other option being no action at all).[11] An alternate viewpoint is that since moral wrongs are already in place in the situation, moving to another track constitutes a participation in the moral wrong, making one partially responsible for the death when otherwise no one would be responsible. An opponent of action may also point to the incommensurability of human lives. Under some interpretations of moral obligation, simply being present in this situation and being able to influence its outcome constitutes an obligation to participate. If this is the case, then deciding to do nothing would be considered an immoral act if one values five lives more than one.

Related problems

Five variants of the trolley problem: the original Switch, the Fat Man, the Fat Villain, the Loop and the Man in the Yard

The trolley problem is one of several ethical thought experiments that highlight the difference between deontological and consequentialist ethical systems. The central question these dilemmas raise is whether it is right to actively diminish one individual's utility if doing so produces a greater utility for other individuals.

The initial trolley problem also invites comparison with other, related dilemmas:

The fat man

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Resistance to this course of action seems strong; when asked, a majority of people will approve of pulling the switch to save a net of four lives, but will disapprove of pushing the fat man to save a net of four lives.[12] This has led to attempts to find a relevant moral distinction between the two cases.

One clear distinction is that in the first case, one does not intend harm towards anyone – harming the one is just a side effect of switching the trolley away from the five. However, in the second case, harming the one is an integral part of the plan to save the five. This is an argument which Shelly Kagan considers (and ultimately rejects) in his first book The Limits of Morality.[13]

A claim can be made that the difference between the two cases is that in the second, you intend someone's death to save the five, and this is wrong, whereas, in the first, you have no such intention. This solution is essentially an application of the doctrine of double effect, which says that you may take action which has bad side effects, but deliberately intending harm (even for good causes) is wrong.

Another distinction is that the first case resembles that of a pilot whose airplane has lost power and is about to crash into a heavily populated area. Even knowing for certain that innocent, "uninvolved" people will die if he redirects the plane to a less populated area, the pilot will turn the plane without hesitation. Sacrificing one's own life to protect others may well be considered noble, but deliberately killing an uninvolved innocent person, even to save five others, may be insufficient moral or legal justification.

The fat villain

A further development of this example involves the case where the fat man is, in fact, the villain who put the five people in peril. In this instance, pushing the villain to his death, especially to save five innocent people, seems not only morally justifiable but perhaps even imperative.[14] This variant is closely related to another thought experiment, the ticking time bomb scenario, which forces one to choose between two morally questionable acts.

The loop variant

The claim that it is wrong to use the death of one to save five runs into a problem with variants like this:

As before, a trolley is hurtling down a track towards five people and you can divert it onto a secondary track. However, in this variant the secondary track later rejoins the main track, so diverting the trolley still leaves it on a track which leads to the five people. But, the person on the secondary track is a fat person who, when he is killed by the trolley, will stop it from continuing on to the five people. Should you flip the switch?

The only physical difference here is the addition of an extra piece of track. This seems trivial since the trolley will never travel down it. The reason this might affect someone's decision is that in this case, the death of the one actually is part of the plan to save the five.

The rejoining variant may not be fatal to the "using a person as a means" argument. This has been suggested by Michael J. Costa in his 1987 article "Another Trip on the Trolley", where he points out that if we fail to act in this scenario, we will effectively be allowing the five to become a means to save the one: if we do nothing, the impact of the trolley on the five will slow it down and prevent it from circling around and killing the one. Since in either case some people become a means to saving others, we are permitted to count the numbers. This approach requires that we downplay the moral difference between doing and allowing.

Transplant

Here is an alternative case, due to Judith Jarvis Thomson,[3] containing similar numbers and results, but without a trolley:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor. Would it be moral for the doctor to kill the traveler and distribute his healthy organs among the five dying patients, saving their lives?

The man in the yard

Unger argues extensively against traditional non-utilitarian responses to trolley problems. This is one of his examples:

As before, a trolley is hurtling down a track towards five people. You can divert its path by colliding another trolley into it, but if you do, both will be derailed and go down a hill, and into a yard where a man is sleeping in a hammock. He would be killed. Should you proceed?

Responses to this are partly dependent on whether the reader has already encountered the standard trolley problem (since there is a desire to keep one's responses consistent), but Unger notes that people who have not encountered such problems before are quite likely to say that, in this case, the proposed action would be wrong.

Unger therefore argues that different responses to these sorts of problems are based more on psychology than ethics – in this new case, he says, the only important difference is that the man in the yard does not seem particularly "involved". Unger claims that people therefore believe the man is not "fair game", but says that this lack of involvement in the scenario cannot make a moral difference.

Unger also considers cases which are more complex than the original trolley problem, involving more than just two results. In one such case, it is possible to do something which will (a) save the five and kill four (passengers of one or more trolleys and/or the hammock-sleeper), (b) save the five and kill three, (c) save the five and kill two, (d) save the five and kill one, or (e) do nothing and let five die.

Empirical research

In 2001, Joshua Greene and colleagues published the results of the first significant empirical investigation of people's responses to trolley problems.[15] Using functional magnetic resonance imaging, they demonstrated that "personal" dilemmas (like pushing a man off a footbridge) preferentially engage brain regions associated with emotion, whereas "impersonal" dilemmas (like diverting the trolley by flipping a switch) preferentially engage regions associated with controlled reasoning. On these grounds, they advocate the dual-process account of moral decision-making. Since then, numerous other studies have employed trolley problems to study moral judgment, investigating topics such as the role and influence of stress,[16] emotional state,[17] impression management,[18] levels of anonymity,[19] different types of brain damage,[20] physiological arousal,[21] different neurotransmitters,[22] and genetic factors[23] on responses to trolley dilemmas.

Survey data

The trolley problem has been the subject of many surveys, in which approximately 90% of respondents have chosen to kill the one and save the five.[24] If the situation is modified so that the one person to be sacrificed for the five is a relative or romantic partner, respondents are much less likely to be willing to sacrifice that person's life.[25]

A 2009 survey published in a 2013 paper by David Bourget and David Chalmers found that 69.9% of professional philosophers would switch (sacrifice the one individual to save five lives) in the case of the trolley problem, 8% would not switch, and the rest had another view or could not answer.[26]

Implications for autonomous vehicles

Problems analogous to the trolley problem arise in the design of software to control autonomous cars. Situations could occur in which a potentially fatal collision appears to be unavoidable, but in which choices made by the car's software, such as whom or what to crash into, can affect the particulars of the deadly outcome. For example, should the software value the safety of the car's occupants more, or less, than that of potential victims outside the car?[27][28][29][30][31]
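To make the design question concrete, the dilemma can be phrased as minimizing weighted expected harm. The following sketch is purely hypothetical: the function names, scenario encoding, and the single `occupant_weight` parameter are illustrative inventions, not drawn from any real vehicle software, which does not operate on explicit casualty counts.

```python
# Hypothetical sketch: an unavoidable-collision policy that weighs harm
# to the car's occupants against harm to bystanders. All names and
# numbers here are illustrative, not a real autonomous-driving API.

def expected_harm(option, occupant_weight=1.0):
    """Weighted harm of one crash option.

    occupant_weight > 1 prioritizes protecting the car's occupants
    (their harm counts for more); occupant_weight = 1 treats all
    lives equally.
    """
    return (occupant_weight * option["occupants_at_risk"]
            + option["bystanders_at_risk"])

def choose_option(options, occupant_weight=1.0):
    """Pick the option with the lowest weighted expected harm."""
    return min(options, key=lambda o: expected_harm(o, occupant_weight))

# A trolley-style scenario: stay on course and endanger five
# pedestrians, or swerve and endanger one occupant.
options = [
    {"name": "stay_course", "occupants_at_risk": 0, "bystanders_at_risk": 5},
    {"name": "swerve",      "occupants_at_risk": 1, "bystanders_at_risk": 0},
]

print(choose_option(options)["name"])                        # swerve
print(choose_option(options, occupant_weight=10.0)["name"])  # stay_course
```

The two calls show the point of the debate: with equal weighting the utilitarian answer (swerve, one at risk instead of five) wins, but an owner-favoring weight flips the decision, which is exactly the policy choice the cited articles dispute.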

A platform called Moral Machine[32] was created by MIT Media Lab to allow the public to express their opinions on what decisions autonomous vehicles should make in scenarios that use the trolley problem paradigm. Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries.[33] Other approaches make use of virtual reality to assess human behavior in experimental settings.[34][35][36][37] However, some argue that investigating trolley-type cases is not necessary to address the ethical problem of driverless cars, because such cases have a serious practical limitation: handling them would require a top-down plan, which does not fit current approaches to addressing emergencies in artificial intelligence.[38]

There is also the question of whether the law should dictate the ethical standards that all autonomous vehicles must use, or whether individual autonomous car owners or drivers should determine their car's ethical values, such as favoring the safety of the owner or the owner's family over the safety of others. Although most people would not be willing to use an automated car that might sacrifice them in a life-or-death dilemma, some researchers argue, somewhat counterintuitively, that mandatory ethics settings would nevertheless be in users' best interest. According to Gogoll and Müller, "the reason is, simply put, that [personalized ethics settings] would most likely result in a prisoner's dilemma."[39]

In 2016, the German government appointed a commission to study the ethical implications of autonomous driving.[40] The commission adopted 20 rules to be implemented in the laws that will govern the ethical choices that autonomous vehicles will make.

In popular culture

In an urban legend that has existed since at least the mid-1960s, the decision is described as having been made in real life by a drawbridge keeper who was forced to choose between sacrificing a passenger train and his own four-year-old son.[41] The 2003 Czech short film Most (released in the US as The Bridge) deals with a similar plot.[42] This version is often given as an illustration of the Christian belief that God sacrificed his son, Jesus Christ.[41]

In the 2010 video game Fable 3, one of the earliest moral choices players make involves having to choose to execute either their childhood sweetheart or a crowd of protesters. If a decision is not made within a certain period of time, the king announces that the player has five seconds to make up their mind, "or they all die."

Some games such as The Trolley Problem Game[43] and Moral Machine[44] have made interactive games out of the thought experiment.

In 2016, a Facebook page under the name "Trolley Problem Memes" was recognized for its popularity on Facebook.[45] The group administration commonly shares comical variations of the trolley problem and often mixes in multiple types of philosophical dilemmas.[46] A common joke among the users regards "multi-track drifting", in which the lever is pulled after the first set of wheels pass the track, thereby creating a third, often humorous, solution, where all six people tied to the tracks are run over by the trolley, or are spared if the trolley derails.[47]

A trolley problem experiment was conducted in Season 2 Episode 1 of the YouTube Red series Mind Field, presented by Michael Stevens.[48] However, no paper was published on the findings.

The trolley problem forms the major plot premise of "The Trolley Problem" (Season 2, Episode 5) of The Good Place.[49] Later in the second season, the problem is revisited and, within the show's universe, solved by Michael (Ted Danson), who states that self-sacrifice is the only solution.

In 2019, "The Trolley Problem", a play written by Bo Robinson, made its world premiere. The play stages a scenario in which an indecisive girl must choose between killing a family of five on one track or a complete stranger on the other. It examines various ethical and legal dilemmas raised by the trolley problem, including the "fat man" variation, showing the audience the many possible outcomes. The play won the Georgia Thespian Society Playworks Competition in 2018.

Criticism

In a 2014 paper published in Social and Personality Psychology Compass,[50] researchers criticized the use of the trolley problem, arguing, among other things, that the scenario it presents is too extreme and too unconnected to real-life moral situations to be useful or educational.[51]

Brianna Rennix and Nathan J. Robinson of Current Affairs go further and assert that the thought experiment is not only useless but downright detrimental to human psychology. The authors argue that making cold calculations about hypothetical situations in which every alternative results in one or more gruesome deaths encourages a type of thinking that is devoid of human empathy and assumes a mandate to decide who lives or dies. They also question the premise of the scenario: "If I am forced against my will into a situation where people will die, and I have no ability to stop it, how is my choice a 'moral' choice between meaningfully different options, as opposed to a horror show I've just been thrust into, in which I have no meaningful agency at all?"[52]

In a 2017 paper published in Science, Technology, and Human Values, Nassim JafariNaimi[53] argues that the trolley problem frames ethical questions in a reductive way that upholds an impoverished version of utilitarianism. She contends that the popular claim that the trolley problem can serve as a template for algorithmic morality rests on fundamentally flawed premises that serve the most powerful, with potentially dire consequences for the future of cities.

In his 2017 book On Human Nature, Roger Scruton criticises the use of ethical dilemmas such as the trolley problem by philosophers such as Derek Parfit and Peter Singer as ways of illustrating their ethical views. Scruton writes, "These 'dilemmas' have the useful character of eliminating from the situation just about every morally relevant relationship and reducing the problem to one of arithmetic alone." Scruton believes that choosing to change the track so that the train hits the one person instead of the five does not necessarily make one a consequentialist. As a way of showing the flaws in consequentialist responses to ethical problems, Scruton points out paradoxical elements of belief in utilitarianism and similar positions. He believes that Nozick's experience machine thought experiment definitively disproves hedonism.[54]

In a 2018 article published in Psychological Review,[55] researchers pointed out that, as measures of utilitarian decisions, sacrificial dilemmas such as the trolley problem measure only one facet of proto-utilitarian tendencies, namely permissive attitudes toward instrumental harm, while ignoring impartial concern for the greater good. As such, the authors argued that the trolley problem provides only a partial measure of utilitarianism.


References

  1. ^ a b Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect" in Virtues and Vices (Oxford: Basil Blackwell, 1978) (originally appeared in the Oxford Review, Number 5, 1967.)
  2. ^ Judith Jarvis Thomson, Killing, Letting Die, and the Trolley Problem, 59 The Monist 204-17 (1976)
  3. ^ a b Judith Jarvis Thomson, "The Trolley Problem", 94 Yale Law Journal 1395–1415 (1985)
  4. ^ Frances Myrna Kamm, "Harming Some to Save Others", 57 Philosophical Studies 227–60 (1989)
  5. ^ Peter Unger, Living High and Letting Die (Oxford: Oxford University Press, 1996)
  6. ^ Frank Chapman Sharp, A Study of the Influence of Custom on the Moral Judgment, Bulletin of the University of Wisconsin no. 236 (Madison, June 1908), 138.
  7. ^ Frank Chapman Sharp, Ethics (New York: The Century Co, 1928), 42-44, 122.
  8. ^ Hans Welzel, Zeitschrift für die gesamte Strafrechtswissenschaft (ZStW) 63 (1951), 47ff.
  9. ^ Hazon Ish, HM, Sanhedrin #25, s.v. "veyesh leayen". Available online, http://hebrewbooks.org/14332, page 404
  10. ^ Bakewell, Sarah (2013-11-22). "Clang Went the Trolley". The New York Times.
  11. ^ Barcalow, Emmett, Moral Philosophy: Theories and Issues. Belmont, CA: Wadsworth, 2007. Print.
  12. ^ Peter Singer, Ethics and Intuitions The Journal of Ethics (2005). http://www.utilitarian.net/singer/by/200510--.pdf
  13. ^ Shelly Kagan, The Limits of Morality (Oxford: Oxford University Press, 1989)
  14. ^ Carneades.org (2013-10-07), The Fat Villain Trolley Problem (90 Second Philosophy), retrieved 2016-09-04
  15. ^ Greene, Joshua D.; Sommerville, R. Brian; Nystrom, Leigh E.; Darley, John M.; Cohen, Jonathan D. (2001-09-14). "An fMRI Investigation of Emotional Engagement in Moral Judgment". Science. 293 (5537): 2105–2108. Bibcode:2001Sci...293.2105G. doi:10.1126/science.1062872. ISSN 0036-8075. PMID 11557895.
  16. ^ Youssef, Farid F.; Dookeeram, Karine; Basdeo, Vasant; Francis, Emmanuel; Doman, Mekaeel; Mamed, Danielle; Maloo, Stefan; Degannes, Joel; Dobo, Linda (2012). "Stress alters personal moral decision making". Psychoneuroendocrinology. 37 (4): 491–498. doi:10.1016/j.psyneuen.2011.07.017. PMID 21899956.
  17. ^ Valdesolo, Piercarlo; DeSteno, David (2006-06-01). "Manipulations of Emotional Context Shape Moral Judgment". Psychological Science. 17 (6): 476–477. doi:10.1111/j.1467-9280.2006.01731.x. ISSN 0956-7976. PMID 16771796.
  18. ^ Rom, Sarah C.; Conway, Paul (2017-08-30). "The strategic moral self: self-presentation shapes moral dilemma judgments". Journal of Experimental Social Psychology. 74: 24–37. doi:10.1016/j.jesp.2017.08.003. ISSN 0022-1031.
  19. ^ Lee, Minwoo; Sul, Sunhae; Kim, Hackjin (2018-06-18). "Social observation increases deontological judgments in moral dilemmas". Evolution and Human Behavior. 39 (6): 611–621. doi:10.1016/j.evolhumbehav.2018.06.004. ISSN 1090-5138.
  20. ^ Ciaramelli, Elisa; Muccioli, Michela; Làdavas, Elisabetta; Pellegrino, Giuseppe di (2007-06-01). "Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex". Social Cognitive and Affective Neuroscience. 2 (2): 84–92. doi:10.1093/scan/nsm001. ISSN 1749-5024. PMC 2555449. PMID 18985127.
  21. ^ Navarrete, C. David; McDonald, Melissa M.; Mott, Michael L.; Asher, Benjamin (2012-04-01). "Virtual morality: Emotion and action in a simulated three-dimensional "trolley problem"". Emotion. 12 (2): 364–370. doi:10.1037/a0025561. ISSN 1931-1516. PMID 22103331.
  22. ^ Crockett, Molly J.; Clark, Luke; Hauser, Marc D.; Robbins, Trevor W. (2010-10-05). "Serotonin selectively influences moral judgment and behavior through effects on harm aversion". Proceedings of the National Academy of Sciences. 107 (40): 17433–17438. Bibcode:2010PNAS..10717433C. doi:10.1073/pnas.1009396107. ISSN 0027-8424. PMC 2951447. PMID 20876101.
  23. ^ Bernhard, Regan M.; Chaponis, Jonathan; Siburian, Richie; Gallagher, Patience; Ransohoff, Katherine; Wikler, Daniel; Perlis, Roy H.; Greene, Joshua D. (2016-12-01). "Variation in the oxytocin receptor gene (OXTR) is associated with differences in moral judgment". Social Cognitive and Affective Neuroscience. 11 (12): 1872–1881. doi:10.1093/scan/nsw103. ISSN 1749-5016. PMC 5141955. PMID 27497314.
  24. ^ "'Trolley Problem': Virtual-Reality Test for Moral Dilemma – TIME.com". TIME.com.
  25. ^ Journal of Social, Evolutionary, and Cultural Psychology. 4 (3). 2010. ISSN 1933-5377. Archived 2012-04-11 at the Wayback Machine.
  26. ^ Bourget, David; Chalmers, David J. (2013). "What do Philosophers believe?". Retrieved 11 May 2013.
  27. ^ Patrick Lin (October 8, 2013). "The Ethics of Autonomous Cars". The Atlantic.
  28. ^ Tim Worstall (June 18, 2014). "When Should Your Driverless Car From Google Be Allowed To Kill You?". Forbes.
  29. ^ Jean-François Bonnefon; Azim Shariff; Iyad Rahwan (October 13, 2015). "Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?". Science. 352 (6293): 1573–1576. arXiv:1510.03346. Bibcode:2016Sci...352.1573B. doi:10.1126/science.aaf2654. PMID 27339987.
  30. ^ Emerging Technology From the arXiv (October 22, 2015). "Why Self-Driving Cars Must Be Programmed to Kill". MIT Technology review.
  31. ^ Bonnefon, Jean-François; Shariff, Azim; Rahwan, Iyad (2016). "The social dilemma of autonomous vehicles". Science. 352 (6293): 1573–1576. arXiv:1510.03346. Bibcode:2016Sci...352.1573B. doi:10.1126/science.aaf2654. PMID 27339987.
  32. ^ "Moral Machine".
  33. ^ Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad (October 24, 2018). "The Moral Machine experiment". Nature. 563 (7729): 59–64. doi:10.1038/s41586-018-0637-6. PMID 30356211.
  34. ^ Sütfeld, Leon R.; Gast, Richard; König, Peter; Pipa, Gordon (2017). "Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure". Frontiers in Behavioral Neuroscience. 11: 122. doi:10.3389/fnbeh.2017.00122. PMC 5496958. PMID 28725188.
  35. ^ Skulmowski, Alexander; Bunge, Andreas; Kaspar, Kai; Pipa, Gordon (December 16, 2014). "Forced-choice decision-making in modified trolley dilemma situations: a virtual reality and eye tracking study". Frontiers in Behavioral Neuroscience. 8: 426. doi:10.3389/fnbeh.2014.00426. PMC 4267265. PMID 25565997.
  36. ^ Francis, Kathryn B.; Howard, Charles; Howard, Ian S.; Gummerum, Michaela; Ganis, Giorgio; Anderson, Grace; Terbeck, Sylvia (October 10, 2016). "Virtual Morality: Transitioning from Moral Judgment to Moral Action?". PLOS ONE. 11 (10): e0164374. Bibcode:2016PLoSO..1164374F. doi:10.1371/journal.pone.0164374. ISSN 1932-6203. PMC 5056714. PMID 27723826.
  37. ^ Patil, Indrajeet; Cogoni, Carlotta; Zangrando, Nicola; Chittaro, Luca; Silani, Giorgia (January 2, 2014). "Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas". Social Neuroscience. 9 (1): 94–107. doi:10.1080/17470919.2013.870091. ISSN 1747-0919. PMID 24359489.
  38. ^ Himmelreich, Johannes (June 1, 2018). "Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations". Ethical Theory and Moral Practice. 21 (3): 669–684. doi:10.1007/s10677-018-9896-4. ISSN 1572-8447.
  39. ^ Gogoll, Jan; Müller, Julian F. (June 1, 2017). "Autonomous Cars: In Favor of a Mandatory Ethics Setting". Science and Engineering Ethics. 23 (3): 681–700. doi:10.1007/s11948-016-9806-x. ISSN 1471-5546. PMID 27417644.
  40. ^ BMVI Commission (June 20, 2016). "Bericht der Ethik-Kommission Automatisiertes und vernetztes Fahren". Federal Ministry of Transport and Digital Infrastructure (German: Bundesministerium für Verkehr und digitale Infrastruktur). Archived from the original on November 15, 2017.
  41. ^ a b Barbara Mikkelson (27 February 2010). "The Drawbridge Keeper". Snopes.com. Retrieved 20 April 2016.
  42. ^ "Most (2003)". IMDb. 25 January 2003.
  43. ^ "The Trolley Problem Game". Newfa Stuff. Retrieved 2019-01-31.
  44. ^ "Moral Machine". Moral Machine. Retrieved 2019-01-31.
  45. ^ Feldman, Brian (9 August 2016). "The Trolley Problem Is the Internet's Most Philosophical Meme". 2017, New York Media LLC. Retrieved 25 May 2017.
  46. ^ Raicu, Irina (8 June 2016). "Modern variations on the 'Trolley Problem' meme". Vox Media, Inc. Retrieved 25 May 2017.
  47. ^ Zhang, Linch (1 June 2016). "Behind the Absurd Popularity of Trolley Problem Memes". TheHuffingtonPost.com, Inc. Retrieved 25 May 2017.
  48. ^ Stevens, Michael (6 December 2017). "The Greater Good - Mind Field S2 (Ep 1)". youtube.com. Vsauce. Retrieved 23 December 2018.
  49. ^ Perkins, Dennis (October 19, 2017). "Chidi wrestles with "The Trolley Problem" on a brilliantly funny The Good Place". avclub.com. The Onion. Retrieved March 28, 2018.
  50. ^ Bauman, Christopher W.; McGraw, A. Peter; Bartels, Daniel M.; Warren, Caleb (September 4, 2014). "Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology". Social and Personality Psychology Compass. 8 (9): 536–554. doi:10.1111/spc3.12131.
  51. ^ Khazan, Olga (July 24, 2014). "Is One of the Most Popular Psychology Experiments Worthless?". The Atlantic.
  52. ^ Rennix, Brianna; Robinson, Nathan J. (November 3, 2017). "The Trolley Problem Will Tell You Nothing Useful About Morality". Current Affairs.
  53. ^ JafariNaimi, Nassim (2018). "Our Bodies in the Trolley's Path, or Why Self-driving Cars Must *Not* Be Programmed to Kill". Science, Technology, & Human Values. 43 (2): 302–323. doi:10.1177/0162243917718942.
  54. ^ Scruton, Roger (2017). On Human Nature (1st ed.). Princeton. pp. 79–112. ISBN 978-0-691-18303-9.
  55. ^ Kahane, Guy; Everett, Jim A. C.; Earp, Brian D.; Caviola, Lucius; Faber, Nadira S.; Crockett, Molly J.; Savulescu, Julian (March 2018). "Beyond sacrificial harm: A two-dimensional model of utilitarian psychology". Psychological Review. 125 (2): 131–164. doi:10.1037/rev0000093.
