Talk:Trolley problem

WikiProject Philosophy (Rated B-class, Mid-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article has been rated as B-Class on the project's quality scale and as Mid-importance on the project's importance scale.

Wikipedia Version 1.0 Editorial Team / v0.7
This article has been reviewed by the Version 1.0 Editorial Team and has been selected for Version 0.7 and subsequent release versions of Wikipedia.

This article is included in the 2006 Wikipedia CD Selection, or is a candidate for inclusion in the next version. Please maintain high quality standards and, if possible, stick to GFDL-compatible images.

General Comments

This article should be rewritten. It appears to be almost entirely POV. A good approach would be to focus on the relevant research rooted in instinct and evolutionary biology. There is quite a bit out there. 99.29.150.13 (talk) 01:50, 30 September 2010 (UTC)

I think that this page is problematic. It does explain the Trolley problem but the discussion is very one-sided. It does not do a good job explaining the different points of view in the debate and giving references. It also doesn't mention the criticism that the scenario is highly artificial and unrealistic, that a trolley cannot run in a loop on its own, that in all scenarios there are alternative paths of action (e.g. warning the people in danger), that there is no guarantee that throwing somebody in front of the trolley would derail it and save the other people, etc. I am not familiar enough with the literature to improve the article but I hope somebody is up to the challenge. —Preceding unsigned comment added by 130.184.20.15 (talk) 20:22, 13 November 2009 (UTC)


The Unger reference lacks a complete citation. Can anyone fill it in? The bibliography is still very scanty. Does anyone know of a more complete "trolley problem" bibliography?

Alright, I've done the Unger citation. Evercat 17:11, 22 Aug 2003 (UTC)
I also removed the link to virtue ethics, which I'm not sure is too relevant - the trolley problem is mostly a major point of contention between utilitarians and deontologists... Evercat 17:11, 22 Aug 2003 (UTC)
This is just a bit tricky. Foot, who came up with the Trolley Problem in the first place, advanced it as part of her very large metaethical project, which is ultimately aimed at virtue ethics. Her original paper is in her anthology Virtues and Vices. So it seemed to me that a link to virtue ethics was appropriate. I'll let this hang for a few days before I put the link back. Kudos to Evercat for the work on this entry. User:Lsolum
Oh, and I'm hoping to add some more to this soon. It's hopelessly incomplete at the moment. Evercat 17:11, 22 Aug 2003 (UTC)
Agreed. It would be nice to get some of the substance in. Perhaps from Foot's original essay & then Thomson. Warren Quinn also has some nice discussion. User:Lsolum

Gee, one might think that since the author of the original Problem was a virtue ethicist, there would be some mention/treatment of virtue-ethics thinking on a solution. Seems like a major lacuna in the article, one that would disqualify it from any sort of decent quality rating. JKeck (talk) 20:27, 18 October 2011 (UTC)



I'm kinda torn because I have a nice example that I think destroys the argument that it's wrong to use someone's death, but it's my own idea and would violate Wikipedia's prohibition on "original research" (of course, it's entirely possible someone else has thought of this):

As before, a trolley is hurtling down a track towards five people. As in the first case, you can divert it onto a separate track. On this track is a single fat man. However, beyond the fat man, this track turns back onto the main line, and if it weren't for the presence of the fat man, flipping the switch would not save the five. Should you flip the switch?

I think the answer must be yes, because the only difference between this case and the original one is that there's an extra bit of track. That can't matter. Yet in this new case, you actually do use the death of the one to save the five - his death is part of the plan... Evercat 18:09, 22 Aug 2003 (UTC)

Very nice example. Michael Otsuka (now at the University of London) developed a series of similar cases in a seminar he did at UCLA several years ago. User:Lsolum

Ah. Perhaps I can find something close enough to be able to use it, then. :-) Oh, and if you feel virtue ethics is relevant, then go ahead and add it back in. Evercat 18:18, 22 Aug 2003 (UTC)


Ah, I've got it. My version is similar to the "loop variant", I think (though the shape of the track is slightly different; I'm not sure if this makes a difference):

             /=====F====\
            //          \\
           //           //         Loop variant
 T -> =====/====PPPPP===/

The above is the loop variant, with P indicating a thin person and F indicating the fat man...

             /=====F====\
            //          \\         My variant
           //            \\    
 T -> =====/==============\===PPPPP=====

This was my idea - the difference between the two is that in the first case, if the five were absent the one would die (because of the loop), whereas in mine the one would be in no danger, even if the five were absent.

I don't know if this makes a difference... in the first case the fat man is more involved than in the second, since in the first case the deaths of the five are necessary for his survival... this might be claimed to make a difference.

So I prefer my version, but have used the loop version since that's the one that's not original research. :-) Evercat 12:37, 23 Aug 2003 (UTC)


Thing is, I don't think this takes the Categorical Imperative out of the picture. I just think that, in this case, it's wrong to divert the train. And I think a lot of people would agree. This quite closely resembles pushing a bird-watching chap in front of the train, which many people also agree is wrong. Think about it. Would you intentionally kill someone to save others? I think this demonstrates the principle of not using a life as a means to an end in action. —Preceding unsigned comment added by 218.215.153.208 (talk) 05:57, 7 March 2009 (UTC)


Man, I love Wikipedia's high Google page-rank. Trolley problem is already the top hit on a Google search for "trolley problem". :-) Evercat 13:28, 23 Aug 2003 (UTC)

Or it was. Gone now, and I can't find it on Google at all. Odd. Evercat 12:00, 24 Aug 2003 (UTC)

Evercat, it will come back. I've noticed this with other Wikipedia pages & with other pages as well. My hypothesis is that it has to do with load times. If Google revisits a page & the load time exceeds some threshold value, it eliminates the page. Anyone else know what's going on? User:Lsolum

The trolley, the fat man, and moral hypocrisy

I view the dilemma of the switched trolley and pushing the fat man in the path of the trolley rather differently.

The fine distinction between diverting harm and putting someone in harm's way seems to me morally immaterial: the two are equivalent if the person throwing the switch knows that, by doing so, a person will be killed who would otherwise have survived.

The difference I see has a slight air of hypocrisy to it, because throwing the switch gives you a certain deniability for your action, whereas pushing someone does not, but you know the result will be the same. So perhaps the person who thinks throwing the switch is OK but pushing the fat man is not is indulging in a little moral cowardice, by being unwilling to face the certain result of his actions. Cecropia 06:45, 28 Mar 2004 (UTC)

The needs of the many outweigh the needs of the few. Flip the switch. —Preceding unsigned comment added by 194.105.120.80 (talk) 20:40, 7 October 2010 (UTC)

I don't see it as hypocritical. If I flip the switch, I am doing so in order to avoid hitting the 5 people; I'm not trying to hit the one person. The loss of the one person on the other track is regrettable. I have no choice but to hit someone. In the scenario with the fat man, I would be actively choosing to kill him if I were to push him. They are equivalent only if you look at it from a purely consequentialist perspective.--RLent (talk) 06:43, 22 January 2008 (UTC)
I don't see it as being hypocritical either. The difference for me is not so much that you know you're doing something wrong as that, at a very deep emotional level, I don't know it will work. The satisfaction the switch provides, that everything will work exactly the way it's designed, just isn't there. I believe that even if you state clearly, with the "voice of God", that the fat man will definitely stop the train, people will still be more uneasy with the bridge scenario than with the switch scenario. If, however, you made the scenario more switch-like, e.g. "the mad philosopher rigged the train with a motion detector that brakes when it hits someone, and the five passengers are a mile away, before shooting himself so he couldn't affect the outcome", then (at least to me) the option of pushing the fat man in front of the train seems a lot more palatable. This even adds a bit of "Kobayashi Maru" fun to the dilemma; the answerer is free to imagine scenarios where he pushes the fat man in front of the train in a way that ends up saving his life, even if you state firmly that this will never actually happen. —Preceding unsigned comment added by 173.17.184.142 (talk) 21:35, 3 December 2010 (UTC)

My responsibility

What if the fat man on the bridge is me, and I'm all alone? Then I have two options: either I commit suicide by jumping down onto the track to save the 5 people, or they'll be run over by the trolley. If I jump I will certainly die, but the other ones will live. What good is saving the 5 people when you're dead? On the other hand, since I'm not taking any physical action (like pushing a man over the bridge), will I have the same "blood" on my hands?

This was the thought of Anton Tyrberg.

What good is living when the entire world will see you as a lazy, self-serving coward? -- 12.116.162.162 15:48, 1 November 2007 (UTC)
To sacrifice yourself so that others may live is commendable, but it is "above and beyond the call of duty". It is something that people may highly praise you for if you do it, but also that people will not condemn you for not doing it. As for the question of what good it is to sacrifice yourself if you die in the process, that's something only the individual in question can answer for themselves. --RLent (talk) 06:33, 22 January 2008 (UTC)

Up the garden path?

In the absence of sufficient qualifying information for the original problem, it may be justifiably argued that only one person will die in bringing the trolley to a halt, regardless of the position of the switch. It will be either the first of the five or the single person. The real dilemma then becomes which of two people is to die, not whether one should be sacrificed to save five.

The Wanderer

Who says that hitting the people will stop the trolley? The trolley might run everyone over and keep going. If you try to solve an ethics issue by attacking the basis, that raises a different ethics issue. -- Cecropia 00:48, 3 March 2006 (UTC)
I agree that the fact that his plan might fail and he might kill all five people (you can even guarantee this as a condition of the choice when setting up the problem) must be considered if you make decisions the way he did. But I disagree with your suggestion that he's trying to attack the basis of the problem or dodge the question. His plan runs a high risk of failure, or may even be doomed to failure, but he's not attempting to change the scenario at all, just his way of examining it. Also, the fat man problem, given in tandem with this one, in which one overweight person is able to stop an entire train, means that there's even merit in questioning the basis of the trolley problem. At least, a definition of the problem that is consistent between both questions should be agreed upon before anyone attempts to address it directly.

Suggested Revision

I don't get the bit about how the version of the trolley story contained in the text causes trouble for the 2nd formulation of the Categorical Imperative. I could be wrong here, but I think that it doesn't. Intuition tells us you may flip the switch. The CI tells you that you may not if your plan of action involves using a person merely as a means to some end. Since you aren't aiming at the person on the second track, I don't get this at all. Now, the loop case combined with the original case does cause trouble. As Thomson puts it, the extra bit of track seems not to matter at all. However, the only reason you could have to switch in the loop case (as she tells it) is that you switch in order to hit the fat man and stop the trolley. Here the man figures in your plan as a means to your intended end. We think (intuitively) that the extra bit of track makes no difference, so if you should switch in the original case, you should switch in the loop case. However, the CI tells us that in the loop case, one may not switch. I think that at the very least, someone should explain why the CI forbids switching.


The loop variant may not be fatal to the 'using a person as a means' argument. This has been suggested by M. Costa in "Another Trip on the Trolley", who points out that if we fail to act in this scenario we will effectively be allowing the five to be used as a means of saving the one, as the five will slow the train down and prevent it from circling around and killing the one. Since in either case some will become a means of saving others, we are permitted to count the numbers. This approach requires that we downplay the moral difference between doing and allowing.


A. Woodley

Wrong. The CI is pretty clear about this. You may not choose to do either (see the above reasons); the conclusion is that you may simply refrain from interfering. Unless, of course, you act out of (rational) moral responsibility, in which case you must fulfill that responsibility.
The idea is that if both doing and allowing are immoral and there are no other options, allowing is the way to go, unless your moral responsibility (that is, the maxims of your actions) says otherwise. So with Kant, there IS a difference between doing and allowing, in that allowing is a valid option if all other options are not. With Kant there are no degrees of unethicalness: an action is either ethical or not. In this case neither is, but inaction is the preferable one by default. — Ashmodai (talk · contribs) 16:08, 23 August 2006 (UTC)


Effect of Size/weight of trolley and number of people on board

The question arises why a shopping trolley should be fatal, as they only weigh a kilo or so. Also, it's well known that shopping trolleys don't move in a predictable way, so switch flipping isn't likely to affect the outcome; a laissez-faire approach results in no fatalities and thus no dilemma. On the other hand, if you replace a trolley with a train, does the number of people on board the train affect your choice? Would running over 5 to stop a train save 100 on board?

Zeph

Is that a joke? We're not talking about shopping trolleys. -- Cecropia 00:45, 3 March 2006 (UTC)
Probably a misunderstanding. We are indeed talking about a train without passengers or crew. I think this may be an exclusively American use of the word, though. It certainly confused me the first time I read the intro. — Ashmodai (talk · contribs) 16:08, 23 August 2006 (UTC)


It says that Alastair Norcross might accept the transplant in exceedingly unlikely situations. Where has he said this? Please cite.

Odd statement...

"This is puzzling, because, in flipping the switch, you are not passively allowing the death of the one on the sidetrack, but actively causing her death. It looks like a case of killing, not just a case of letting die. And we don't generally make favorable moral judgments about those who kill others, even if their actions have good consequences as well." I've cut this statement, as it is highly unencylopedic. There are two major problems: 1) "We don't generally make favorable moral judgments about those who kill others, even if their actions have good consequences as well." Who is "we"? Alos, "actively causing HER death" (emphasis added). Since no gender is given for the person on the track, shouldn't we use the generaic "his"? LaszloWalrus 22:49, 19 May 2006 (UTC)

There are heated debates over the question whether "his" or "her" is more politically correct if you don't know the gender of the person, but I don't care as long as either is used consistently throughout the article.
"We" probably refers to humans in general, although it is arguably a mere generalisation in this case. This sentence neglects the soldier problem ("Are soldiers murderers and if not, what sets them morally apart?" or some version of that), for example. — Ashmodai (talk · contribs) 16:13, 23 August 2006 (UTC)

Cultural bias?

Coming from a German educational background, and thus a Kantian idea of ethics (unlike Utilitarianism, which is more common in Anglophone cultures), the reasoning seems flawed to me.

From a Kantian POV you are required not to interfere with the course of events, if you think it would be unethical to kill anyone intentionally. Since in all scenarios it is obvious that any intervention would result directly in the loss of innocent (and uninvolved) life, any intervention would thus equal murder -- direct (as by pushing the fat man off the bridge) or indirect (as by pushing a switch to divert the trolley).

Of course inaction would result in the utilitarianistically (if that is even a word) worse loss of innocent lives, but for Kantian ethics this does not seem particularly important.

The more interesting dilemma is the "plane heading for a nuclear reactor" one, in which you have the choice of intercepting the plane, thereby killing the innocent passengers, or not intercepting it, thereby letting the plane hit the nuclear reactor and kill not only the passengers but also everyone in the vicinity of the reactor (not to mention the long-term damage caused by the radiation).

In that dilemma the lives that would be lost by you deliberately if you intervened are a part of the sum of the lives lost if you did not. That means you could actually save other people's lives by killing a few of them. Again, Kantian ethics forbid any intervention on the grounds of the Kantian Imperative, if you deem murder immoral (otherwise, you would say that it's not necessarily unethical). — Ashmodai (talk · contribs) 15:15, 23 August 2006 (UTC)

Your view is not unheard-of, even in the US, but definitely seems to be in the minority among philosophers (even those heavily influenced by Kant) publishing today. PurplePlatypus 07:44, 16 December 2006 (UTC)
I think your view is quite common among deontologists, Ashmodai, and so the gedankenexperiment is not very interesting in a deontological context. It's only inside of a consequentialist ethical system that it poses interesting problems. I don't think that represents cultural bias, though. The debate between deontologists and consequentialists goes back much further than Kant and is found in some form in every culture. Kragen Javier Sitaker (talk) 19:29, 25 September 2010 (UTC)

Joshua Greene

Who is this Joshua Greene? Is he sufficiently well known to merit about 20% of this article? Seems like self promotion to me.

I agree; that part should be minimized; it certainly shouldn't be at the top of the entry as it is.


Removed the following text from the first paragraph:

This approach to the problem has been created and popularized by Joshua Greene during his postdoc career at Princeton University. It supports a dualistic framework for the formation of moral thought characterized by emotional responses in contrast and interplay with a cognitive response. Greene and his supporters suggest that this dualism is derived from the evolutionary background of human moral and social behavior.

This researcher seems to have been given too much prominence on this topic - he doesn't even have a Wikipedia article. If he is in fact very eminent, belonging with Philippa Foot, Judith Jarvis Thomson and Peter Unger, he can be added back here. --Gargletheape 04:16, 31 December 2006 (UTC)

Seems to me that there's a bunch of garbage here on Dr. Robert Jacobson's introduction of the President case. I'd flag it for content, but I don't know how; and I don't feel comfortable just deleting it.

This problem is raised by virtually every undergraduate who reads the trolley problem, and results from a misunderstanding of the case. I suggest deletion.

My solution

Nobly, instead of switching, jump onto the track yourself. —The preceding unsigned comment was added by 70.184.32.37 (talk) 21:47, 15 May 2007 (UTC).

The mad philosopher, seeing your noble deed, snipes you before you get to the tracks. Now you're dead and you cannot help. -- 12.116.162.162 15:46, 1 November 2007 (UTC)
Which is why this is such a silly question. I take out my own sniper rifle, hit the mad philosopher, and then clip the rope tying up the five people. Nickjost (talk) 16:03, 18 January 2008 (UTC)
You live a happy life, comfortable in the knowledge that you did the right thing. A pandemic flu breaks out shortly thereafter, killing thousands. You are presented with a time machine, capture the first person that had the infection, find out he was supposed to die by being hit by a train in the original timeline, and tie him to some train tracks. To make sure that the conductor hits him, five volunteers from your time agree to be tied to the other set of tracks. Something about this particular bridge and these particular train tracks strikes you as familiar... until you realize, with horror, your fatal error... —Preceding unsigned comment added by 173.17.184.142 (talk) 22:05, 3 December 2010 (UTC)


So instead of running them over, you quarantine them until a cure is found. The end.--75* 19:07, 26 April 2013 (UTC)

Sources

I've just placed some inline source links to the best of my ability. My sense is that many sources apply to larger sections of content than I made them point to, so of course if anyone knows better.... have at it. 67.36.192.234 (talk) 20:44, 3 March 2008 (UTC)

A Buddhist solution

Based on my understanding of the Buddhist perspective, the person facing the quandary is the most important part of the moral equation.

The basic background is that whoever dies, it is the result of their own karma from past negative actions towards others. This means that the ultimate result is actually beyond the control of the person throwing the switch, who will generally also be driven by their own accumulated karma and inclinations.

What matters hugely is the emotional state, intention, vows, and spiritual level of the person throwing the switch.

If the switch is thrown or not thrown with indifference to the fates of the people affected, or with hatred, then this merely adds to the net suffering of the world, since the indifference or hatred has transformed any act into a negative one that will eventually manifest in the future life of the person throwing the switch.

If the switch is thrown or not thrown with a great deal of compassion for the fates of all involved and an intention to benefit them altruistically in future lives then any action can become positive.

If the person throwing the switch holds vows of personal liberation but is not compassionately dedicated to the benefit of others, then the effect of throwing the switch should be net negative, since the vow of not killing generates great positive karma which will be undermined by the decision.

If the person is an enlightened Bodhisattva with great clairvoyance and insight, they can make the ultimate utilitarian decision and choose whichever outcome will result in the least suffering, based on the internal dispositions and karmas of the people on the track. For instance, if the five had the intention to murder and the one was dedicated selflessly to helping others, the Bodhisattva must choose the one over the five, because the five, if they lived, would only generate net suffering for themselves and others. In fact it is a Bodhisattva's vow to make that choice and not the ordinary utilitarian choice, even when it violates a monk's vow not to kill. —Preceding unsigned comment added by 124.169.103.48 (talk) 23:10, 15 October 2008 (UTC)

The person you are describing doesn't exist. It's my understanding that the Buddhist whose answer we're both considering would point out that a philosophy that isn't immediately available to the person pulling the switch, who doesn't have omniscience enough to weigh the fates of the six people on the tracks, is not a philosophy worth considering, ever. It's also my understanding that, underneath our considering what a Buddhist might or might not think, there is a philosophy that is immediately available to the man pulling the switch. You described it as karma, and I think that's a more than adequate description. But, I think there's some value in considering what you would say to the man or woman making the decision, if that man or woman weren't a Buddhist. From a Buddhist perspective, of course. —Preceding unsigned comment added by 173.17.184.142 (talk) 22:17, 3 December 2010 (UTC)

Flipping the switch

The problem changes a bit if we suppose it to be one's job to flip switches for the company - then one bears responsibility for whatever position the switch is in. It also changes if the switches can be controlled remotely - particularly if that remote control were in the trolley car and one were the driver --JimWae (talk) 04:36, 27 October 2008 (UTC)

Seldom do we know the future consequences as clearly as stated in the scenario of this problem --JimWae (talk) 04:38, 27 October 2008 (UTC)


The "ferry dilemma" in The Dark Knight seems to be related to this trolley dilemma. Does anyone know a general term for these kinds of dilemmas? Both also seem to ignore the possibility that the Evil daemon might be lying - you might blow up your own ferry, or flipping the switch might kill 5 instread of 1 --JimWae (talk) 05:55, 13 January 2009 (UTC)

What the dilemmas do show is that even when all the consequences are given (which is never the case in real life), consequentialism does not answer every moral question we would raise in those circumstances.--JimWae (talk) 20:01, 13 September 2011 (UTC)

Who do you let live, Steve Jobs or 5 welfare abusing serial child rapists?

What if the 5 people were all welfare abusing serial child rapists whereas the one guy on the other track was Steve Jobs or a brilliant scientist on the verge of finding the cure for cancer? What's the purpose of ethics, to maximize the benefit to society, protect the rights of every individual, or simply to best minimize remorse by answering to crude human psychology that always rationalizes that 5 people dying must be worse than 1 person dying? —Preceding unsigned comment added by Exander (talkcontribs) 06:26, 6 February 2009 (UTC)

This is not a forum for discussion of the subject itself. 130.18.243.137 (talk) 06:52, 14 January 2011 (UTC)
The assumption is that you don't know either the 5 people or the 1 person. Mehranshargh (talk) 22:45, 15 March 2012 (UTC)

The Fat Man - issue

"Resistance to this course of action seems strong; most people who approved of sacrificing one to save five in the first case do not approve in the second sort of case. This has led to attempts to find a non-relevant moral distinction between the two cases."

Shouldn't this be 'attempts to find a relevant moral distinction', rather than a non-relevant distinction? As the two cases are otherwise the same in terms of outcome, we want to find a relevant moral distinction that explains why the majority of people have different intuitions about them. Power nap (talk) 09:33, 19 March 2009 (UTC)

I've done the edit. Please discuss if you want to change it back!! Power nap (talk) 01:53, 15 April 2009 (UTC)

Transplant/medical ethics

Do you think it's worth mentioning that in real life, western medical ethics would specifically forbid one option? It wouldn't alter the validity as a thought experiment.- cyclosarin (talk) 07:50, 10 May 2009 (UTC)

This was already solved by the Universal Declaration of Human Rights. The "transplant" scenario is currently backwards Nazi philosophy. Extemporaneous Wiki entry. Aldo L (talk) 13:38, 24 March 2011 (UTC)
In real life it's still highly improbable to do 5 transplants in a row. On a separate note, why doesn't the "brilliant transplant surgeon" use 1 of the dying patients' organs to save the other 4 and let the innocent traveler go?! Mehranshargh (talk) 22:50, 15 March 2012 (UTC)

Weasel Words

Where are the weasel words? —Dromioofephesus (talk) 05:55, 19 February 2010 (UTC)

I suspect the weasel words are the following:
some non-utilitarians may also accept the view…. Opponents might assert that, since moral wrongs are already in place… Additionally, opponents may point to the incommensurability of human lives.
It might also be justifiable…
My inner Wikipedian is crying out, “Which non-utilitarians? Do they accept the view or don’t they? Are these opponents who might advocate non-participation purely imaginary, or are they real? Who are they? Do people actually point to the incommensurability of human lives, or is that a hypothetical strawman being used to attack hypothetical incommensurabilists? Anything might be justifiable; is there a school of thought that argues that the following ellipsis is justifiable?”
Unfortunately I don’t have enough of a background in ethical philosophy to answer these questions myself, or I’d fix the article and remove the “weasel words” tag. However, I assert that anyone who reads this article and has the relevant knowledge has a utilitarian obligation to edit the article so that the trolley runs over and kills the weasel words.
Wikipedia, not being written in the voice of any particular author, has little tolerance for descriptions of what hypothetical people might believe as logical consequences of premises they hypothetically hold, because that style of discourse has far too much scope for rhetorical abuse in the hands of effectively anonymous authors discussing contentious matters such as Scientology, the Armenian Genocide, or the Gaza Strip. There are an infinite number of plausible chains of reasoning by which one can plausibly derive odious moral consequences from the stated beliefs of one’s ideological opponents. Consequently this sort of thing is frowned upon, because it tends to lead to unproductive edit wars.
However, it would be entirely justifiable to say, “Thomson points out that a hypothetical utilitarian could reason that taking an action in the situation assumes responsibility for the consequences,” assuming she does, of course. The issue is the use of the “some say” device (in this case, worse: “some could conceivably say”) used by unethical journalists to inject their own opinions into ostensibly factual articles, cloaked in a thin veil of objectivity. I don’t think it’s being used that way here, but some clarification would help.
I originally posted this as (part of) a comment on a Crooked Timber post. Kragen Javier Sitaker (talk) 19:13, 25 September 2010 (UTC)

New Article

I suggest that this article should become part of a new one entitled "Philosophers with too much time on their hands." 198.179.227.59 (talk) 00:56, 3 June 2010 (UTC)


No thanks. --75* 19:27, 26 April 2013 (UTC)

The Fat Man

Has anyone considered that maybe a significant reason why people wouldn't push the fat man in front of the trolley is because they're not convinced that would be enough to stop it? Or perhaps they're unsure as to whether they're able to push the fat man precisely enough to land directly in front of the trolley. We're not told how far he is from the trolley's path. We're not sure how much effort it would take to push him in front of the trolley. I mean, fat people are difficult to push. We'd have to use a lot of strength and get the timing just right. This case is not just a matter of ethics; it's also a matter of practicality and of our confidence as to whether we could actually stop the trolley using the fat man. Surely I'm not the only one who has noticed the practical problems with this scenario.

The problem is not meant to be 100% practical; it represents a (semi-realistic) situation so the person reading it can identify with the dilemma. For this type of question you must assume the facts as they are presented, to narrow the choice down to the moral fundamentals of the problem. —Preceding unsigned comment added by 86.135.246.46 (talk) 11:01, 18 September 2010 (UTC)
Assuming the facts as they are presented still leaves a lot of cognitive dissonance, for exactly the reason he's pointed out. The mystery is where this resistance to the fat man scenario comes from, despite the similarities in consequences, and I think his answer to that question is worth considering. I think this problem is an excellent case study in how the very methodology we use to examine moral problems is capable of obfuscating nonetheless valid moral points. The temptation on my part is to say that the fat man scenario should be rewritten, but the reasons why I feel it should be rewritten add a lot of value to the problem, and those would be lost if the fates of the train and its passengers after the fat man was pushed in front of it were made more certain. —Preceding unsigned comment added by 173.17.184.142 (talk) 22:30, 3 December 2010 (UTC)
Judy Collins' transplant situation is similar, and gets a similar response. 130.18.243.137 (talk) 06:55, 14 January 2011 (UTC)
Yes, I agree with the OP. Even though the researcher may tell the subject that he must assume that the fat man will stop the trolley, the subject is considering his response as if it were a real life scenario, and surely he'll take these doubts into account. Branchc (talk) 19:57, 6 February 2014 (UTC)
The scenarios are ONLY equivalent if you remove all the practical differences and abstract them both back to: "you're forced to save 5 and kill 1 or kill 1 and save 5". If you instinctively have a problem with the "pushing the fat man" scenario, might it not have something to do with the practical impossibility of pushing a massive man? He presumably weighs a lot more than 5 men, since the scenario specifies that the trolley definitely will kill 5 people and definitely be stopped by one massive man. Assuming the men are big men weighing say 100 kg including clothes etc., that means the fat man must weigh a lot more than 500 kg for him to stop it. I don't think I could push someone weighing 200 kg against their will. So either you're adding superpowers to the scenario (raising the question of why I cannot use my super strength to stop the trolley with my bare hands instead? The man must weigh at least as much as the trolley, if not more, to guarantee he will stop it.) or you agree that the man might not stop the trolley at all and you'll end up murdering an innocent bystander AND killing 5 guys who willingly accepted the risk of being on the track. Qvasi (talk) 13:43, 29 May 2014 (UTC)

Instances in real life

Is it useful to give examples in real life of situations the same as (or very similar to) this problem? The one I am thinking of is a journalist who was visiting an embattled town in a civil war and was challenged by a sniper to 'save' one of 2 civilians in his sights or stand by and watch both die from the sniper's bullets. In this case the journalist walked away. Obviously this would be fraught with problems of proving such a situation occurred, and with arguments as to whether it is the same dilemma... —Preceding unsigned comment added by 86.135.246.46 (talk) 10:56, 18 September 2010 (UTC)

I have a recent one. An article on Dick Cheney a few years ago said that he would have made the decision to shoot down the 4th hijacked passenger plane in order to save lives. (As it turned out, the plane went down on its own.) Someone with a few minutes could go on Google or Bing and find the story, then post a couple sentences in the article. Hanxu9 (talk) 17:13, 29 May 2012 (UTC)
Wikipedia guidelines would call that original research and disallow it. The appropriate examples to include might be cases in which a commentator explicitly compares the situation to the trolley problem. Blue Rasberry (talk) 22:49, 29 May 2012 (UTC)

The fat villain

The fat villain is a problem that I've never heard discussed anywhere but in this article, and it's completely unsourced. Now, I know retributivists are horribly flustered by the suggestion that human life has some sort of intrinsic worth greater than the pleasure that they get out of seeing other humans die, but for the purpose of this thought experiment, let's assume that the value of all lives involved is equal and nonnegative. The question of your personal valuation of other human beings' lives is not relevant to the discussion. 130.18.131.194 (talk) 00:50, 24 April 2011 (UTC)

Test it in real life

Somebody should test this in real life. Get a real trolley and train people to operate it. Make them think they are qualifying for a job operating the trolley. Then let them do a test run and put real people in the path of the trolley. The operator will be forced to make a choice: divert the trolley or not. Either way, the people in the way will be pulled to safety. I would love to see the difference between the percentage of people who SAY they will divert the trolley and the percentage of people who actually DO it. Should be fun.--RaptorHunter (talk) 23:58, 10 June 2011 (UTC)

minor problem with clarity

The Overview makes reference to a 'mad philosopher' as if the reader should already be familiar with who this is, but it is never explained. — Preceding unsigned comment added by 124.170.35.138 (talk) 10:15, 17 January 2012 (UTC)

Set up archiving

I am setting up archiving on this page. There are too many settled discussions here to navigate the talk page easily and besides that there is a lot of forum discussion which I am going to delete outright. I set up a bot to do the archiving just so that in the future the archives will already be bot-compatible. Blue Rasberry (talk) 17:04, 4 March 2012 (UTC)