Dual process theory (moral psychology)

From Wikipedia, the free encyclopedia

Dual process theory is an influential theory of human moral judgment that posits that human beings possess two distinct cognitive subsystems that compete in moral reasoning: one fast, intuitive and emotionally driven, the other slow, deliberative and less dependent on emotion. Initially proposed by Joshua Greene along with Brian Sommerville, Leigh Nystrom, John Darley, Jonathan Cohen and others,[1][2][3] the theory can be seen as a domain-specific example of more general dual process accounts in psychology. Greene has often emphasized the philosophical (and specifically ethical) implications of the theory,[4][5][6] and it has received extensive discussion in ethics.[7][8][9][10]

Core commitments of the dual process theory[edit]

The dual process account asserts that human beings have two separate methods of moral reasoning. The theory draws on recent scientific findings about the workings of the brain to develop criteria for assessing our intuitions and moral judgments. If these inner workings can be revealed, we may have reason to place less confidence in the ethical judgments they produce.

The first method with which we make decisions involves fast, intuitive processing. These responses are implicit and the factors affecting them may be consciously inaccessible.[11] The second method refers to conscious, controlled reasoning processes. This method is less influenced by the immediate emotional aspects of decision making and instead focuses on maximizing gain or a particular conception of the good. In everyday decision making, most decisions use one or the other of these systems.

Greene hypothesizes that we respond to "personal" and "impersonal" moral dilemmas in different ways. The roots of differing responses lie in our different emotional responses.[12] "Heat of the moment" emotional reactions influence our responses to "personal" moral dilemmas but not "impersonal" moral dilemmas.

As Greene puts it:

"Characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control."[6]

This theory of moral judgment has been influential in moral psychology research. The original fMRI investigation[1] proposing the dual process account has been cited in over 2,000 scholarly articles, generating both extensive use of similar methodology and criticism.

Camera analogy[edit]

Greene compares the dual-process brain to a digital SLR camera that operates in two complementary modes: automatic and manual.[13] The automatic settings are highly efficient but not very flexible, while the manual settings are the opposite.[13] He claims that the human brain has a similar general design: it comes wired with a variety of automatic settings, most of them emotional, that allow intuitions to guide our behaviours, and of which we may or may not be consciously aware.[13] We rely on these automatic settings most of the time. On the other hand, there is also a manual mode in our brains, which specialises in enabling behaviours that serve longer-term goals. The operations of this system are usually conscious, and often experienced as effortful.[13] This mode of thinking requires "using explicit rules" and thinking "explicitly about how the world works".[13]

Nevertheless, he also highlights three ways in which this analogy could be misleading. First, while a camera must be in either automatic or manual mode, the human brain's automatic settings are always on. Second, the dual settings in our brains are asymmetrically dependent, whereas a camera's two modes can function independently of each other. Third, the brain's automatic settings can be acquired or modified through cultural learning, and so need not be "innate" or "hard-wired".[13]

Scientific evidence[edit]


Greene used fMRI to examine the brain activity and responses of people confronted with different variants of the famous trolley problem in ethics.

Two versions of the trolley problem are used, the trolley driver dilemma and the footbridge dilemma, presented as follows.

Trolley Driver Dilemma: “You are at the wheel of a runaway trolley quickly approaching a fork in the tracks. On the tracks extending to the left is a group of five railway workmen. On the tracks extending to the right is a single railway workman. If you do nothing the trolley will proceed to the left, causing the deaths of the five workmen. The only way to avoid the deaths of these workmen is to hit a switch on your dashboard that will cause the trolley to proceed to the right, causing the death of the single workman. Is it appropriate for you to hit the switch in order to avoid the deaths of the five workmen?”[9] (Most people judge that it is appropriate to hit the switch in this case.)

Footbridge Dilemma: “A runaway trolley is heading down the tracks toward five workmen who will be killed if the trolley proceeds on its present course. You are on a footbridge over the tracks, in between the approaching trolley and the five workmen. Next to you on this footbridge is a stranger who happens to be very large. The only way to save the lives of the five workmen is to push this stranger off the bridge and onto the tracks below where his large body will stop the trolley. The stranger will die if you do this, but the five workmen will be saved. Is it appropriate for you to push the stranger onto the tracks in order to save the five workmen?”[9] (Most people judge that it is not appropriate to push the stranger onto the tracks.)

Greene and his colleagues wanted to know why people find it appropriate to hit the switch in the driver case but not appropriate to push the stranger in the footbridge case, so they investigated the brain activity and responses of people facing those cases. First, they drew a distinction between two types of moral judgment: characteristically consequentialist judgments and characteristically deontological judgments. Characteristically consequentialist judgments are those that philosophers use to justify hitting the switch; they are based on consequentialist principles. Characteristically deontological judgments are those that philosophers use to justify not pushing the stranger; they are based on deontological principles.

The original study showed that moral dilemmas following the logic of "trolleyology" engaged areas of the brain corresponding to emotional processing when the context involved "personal" moral violations (such as direct bodily force). When the context of the dilemma was more "impersonal" (the decision maker pulls a switch rather than using bodily force), areas corresponding to working memory and controlled reasoning were engaged instead.[1] This gives rise to what Greene calls the Central Tension Problem: characteristically deontological judgments are often associated with intuitive-emotional reasoning (system 1), while characteristically consequentialist judgments are often associated with conscious reasoning and cognitive control (system 2). These two processes compete with each other when people make moral judgments in the context of the trolley problem.

Greene points to a large body of evidence from cognitive science suggesting that the inclination toward deontological or consequentialist judgment depends on whether emotional-intuitive reactions or more calculated ones are involved in the judgment-making process.[14] For example, encouraging deliberation or removing time pressure leads to an increase in consequentialist responses, while performing a distracting secondary task, such as solving a math problem, reduces the likelihood of the individual choosing the consequentialist approach.[15] When asked to explain or justify their responses, subjects preferentially chose consequentialist principles, even to explain characteristically deontological responses. Further evidence shows that consequentialist responses to trolley-problem-like dilemmas are associated with deficits in emotional awareness in people with alexithymia or psychopathic tendencies.[15] Conversely, subjects primed to be more emotional or empathetic give more characteristically deontological answers.

In addition, Greene's results show that some brain areas, such as the medial prefrontal cortex, the posterior cingulate/precuneus, the posterior superior temporal sulcus/inferior parietal lobe, and the amygdala, are associated with emotional processes. Subjects exhibited increased activity in these brain regions when presented with situations involving the use of personal force (e.g. the 'footbridge' case). The dorsolateral prefrontal cortex and the parietal lobe are 'cognitive' brain regions; subjects show increased activity in these two regions when presented with impersonal moral dilemmas.[16]

Brain lesions[edit]

Neuropsychological evidence from lesion studies focusing on patients with damage to the ventromedial prefrontal cortex also points to a possible dissociation between emotional and rational decision processes. Damage to this area is typically associated with antisocial personality traits and impairments in moral decision making.[17] Patients with these lesions tend to endorse the "utilitarian" path in trolley problem dilemmas more frequently.[18] Greene et al. claim this shows that when emotional information is removed, whether through context or through damage to the brain regions needed to render such information, the processes associated with rational, controlled reasoning dominate decision making.[19]

A popular medical case, studied in particular by neuroscientist Antonio Damasio,[20] was that of American railroad worker Phineas Gage. On 13 September 1848, while working on a railway track in Vermont, he was involved in an accident: an "iron rod used to cram down the explosive powder shot into Gage’s cheek, went through the front of his brain, and exited via the top of his head".[21] Surprisingly, not only did Gage survive, he also returned to his normal life in less than two months.[20] Although his physical capacities were restored, his personality and character radically changed. He became vulgar and antisocial: "Where he had once been responsible and self-controlled, now he was impulsive, capricious, and unreliable".[21] Damasio wrote: "Gage was no longer Gage."[20] His moral intuitions, too, were transformed. Further neuroimaging studies showed a correlation between such "moral" and character transformations and injuries to the ventromedial prefrontal cortex.[22]

In his book Descartes' Error, commenting on the Phineas Gage case, Damasio said that after the accident the railroad worker was able "to know, but not to feel".[20] As explained by David Edmonds, Joshua Greene thought that this could explain the difference in moral intuitions across different versions of the trolley problem: "We feel that we shouldn’t push the fat man. But we think it better to save five rather than one life. And the feeling and the thought are distinct.”[21]

Reaction times[edit]

Another critical piece of evidence supporting the dual process account comes from reaction time data in moral dilemma experiments. Subjects who chose the "utilitarian" path in "personal" moral dilemmas showed increased reaction times under high cognitive load, while those choosing the "deontological" path remained unaffected.[23] Cognitive load in general has also been found to increase the likelihood of "deontological" judgment.[24] These laboratory findings are supplemented by work that looks at the decision-making processes of real-world altruists in life-or-death situations.[25] These heroes overwhelmingly described their actions as fast and intuitive, and virtually never as carefully reasoned.

Evolutionary rationale[edit]

The dual process theory is often given an evolutionary rationale (in this basic sense, the theory is an example of evolutionary psychology).

In pre-Darwinian thinking, such as Hume's A Treatise of Human Nature, we find speculations about the origins of morality as deriving from natural phenomena common to all humans. For instance, Hume mentions the "common or natural cause of our passions" and the generation of love for others represented through self-sacrifice for the greater good of the group. His work is sometimes cited as an inspiration for contemporary dual process theories.[8]

Darwin's evolutionary theory gives a better descriptive account of how these moral norms derive from evolutionary processes and natural selection.[8] For example, selective pressures favour self-sacrifice for the benefit of the group and punish those who do not sacrifice. This provides a better explanation of the cost-benefit ratio behind the generation of love for others, as originally described by Hume.

Another example of an evolutionarily derived norm is justice, which is born out of the ability to detect those who cheat. Peter Singer explains that the instinct of reciprocity improved fitness for survival; those who did not reciprocate were therefore considered cheaters and cast off from the group.[8]

Peter Singer agrees with Greene that consequentialist judgements are to be favoured over deontological judgements. According to him, moral constructivism searches for reasonable grounds, whereas deontological judgements rely on hasty, emotional responses. Singer argues that our most immediate moral intuitions should be challenged, and that a normative ethic must not be evaluated by the extent to which it matches those intuitions. He gives the example of a brother and sister who secretly decide to have sex with each other using contraceptive measures. Our first intuitive reaction is a firm condemnation of incest as morally wrong. However, a consequentialist judgement reaches another conclusion: since the brother and sister did not tell anyone and used contraceptive measures, the incest had no harmful consequences. Thus, in that case, incest is not necessarily wrong.[8]

Singer relies on evolutionary theory to justify his claim. For most of our evolutionary history, human beings lived in small groups where violence was ubiquitous. Deontological judgements, linked to emotional and intuitive responses, developed because human beings were confronted with personal, close-range interactions with others. In the past century, our social organization has changed and this type of interaction has become much less frequent. Therefore, we should rely on more sophisticated consequentialist judgements, which better fit modern times, rather than on deontological judgements that were useful for more rudimentary interactions.[8]

Scientific criticisms[edit]

Several scientific criticisms have been leveled against the dual process account. One asserts that the dual emotional/rational model ignores the motivational aspect of decision making in human social contexts.[26][27] A more specific example of this criticism focuses on the ventromedial prefrontal cortex lesion data. Although patients with this damage display characteristically "cold-blooded" behavior in the trolley problem, they are more likely to endorse emotionally laden choices in the Ultimatum Game.[28] It is argued that moral decisions are better understood as integrating emotional, rational, and motivational information, the last of which has been shown to involve areas of the limbic system and brain stem.[29]

Other criticisms focus on the methodology of using moral dilemmas such as the trolley problem. These criticisms note the lack of affective realism in contrived moral dilemmas and their tendency to use the actions of strangers to probe human moral sentiments. Paul Bloom, in particular, has argued that a multitude of attitudes towards the agents involved are important in evaluating an individual's moral stance, as are the motivations that may inform those decisions.[30]

Berker has raised three methodological worries about Greene's empirical findings.[9] First, neural activity associated with emotional processes is not exclusively correlated with deontological judgements; it can also be found in consequentialist judgements, so one can argue that all moral judgements seem to involve emotional processing. Second, Greene's response time prediction is not borne out once one considers that his study involved "easy" cases that should not be classified as dilemmas: because of the way some cases were framed, people found one of the choices obviously inappropriate. Third, Greene's criteria for classifying impersonal and personal moral dilemmas do not map onto the distinction between deontological and consequentialist moral judgements. The "Lazy Susan Case" serves as a counter-example, showing that intuitive consequentialist answers can involve personal force.

The latter criticism has since been addressed by Greene.

Alleged ethical implications[edit]

Greene ties the two processes to existing theories in moral philosophy, specifically consequentialism and deontological ethics.[31] He argues that the tension between systems of ethics that focus on "right action" and those that focus on "best results" can be explained by the existence of the proposed dueling systems in individual human minds: ethical decisions that fall under "right action" correspond to system 1 processing, whereas those aimed at "best results" correspond to system 2. This poses a problem for deontological moral theory, which can be seen as offering post hoc rationalisations for our emotional responses. Greene argues that these emotional responses are sensitive to morally irrelevant factors, such as personal force: our intuitive moral judgements in trolley cases depend on whether harming someone requires the use of personal force, so people are more willing to bring about the same outcome by flicking a switch than by deliberately pushing a helpless victim, to whom one relates more directly. Greene proposes that this vindicates consequentialism, and he rejects deontology as a moral framework because it relies on morally irrelevant intuitions.

Greene's "direct route" to ethical significance[edit]

Greene first argues that scientific findings can help us reach interesting normative conclusions without crossing the is/ought gap. For example, he considers the normative statement "capital juries make good judgements". Scientific findings could lead us to revise this judgement if it were found that capital juries were in fact sensitive to race, provided we accept the uncontroversial normative premise that capital juries ought not to be sensitive to race.[6]

Greene then states that the evidence for dual-process theory may give us reason to question judgements based upon moral intuitions, in cases where those intuitions might rest upon morally irrelevant factors. He gives the example of incestuous siblings. Intuition might tell us that this is morally wrong, but Greene suggests that this intuition is the result of incest historically being evolutionarily disadvantageous. However, if the siblings take extreme precautions, such as a vasectomy, to avoid the risk of genetic mutation in their offspring, the cause of the moral intuition no longer applies. In such cases, scientific findings give us reason to discount some of our moral intuitions, and in turn to revise the moral judgements based upon those intuitions.[6]

Greene's "indirect route" to ethical significance[edit]

Greene is not making the claim that moral judgements based on emotion are categorically bad. His position is that the different “settings” are appropriate for different scenarios.

With regard to automatic settings, Greene says we should rely on them only when faced with a moral problem that is sufficiently "familiar" to us. Familiarity, on Greene's conception, can arise from three sources: evolutionary history, culture and personal experience. A fear of snakes, for instance, can possibly be traced to genetic dispositions, whereas a reluctance to place one's hand on a stove is caused by previous experience of burning one's hand on a hot stove.[13]

The appropriateness of applying our intuitive and automatic mode of reasoning to a given moral problem thus hinges on how the process was formed in the first place. Shaped by trial-and-error experience, automatic settings will only function well when one has sufficient experience of the situation at hand.

In light of these considerations, Greene formulates the "No Cognitive Miracles Principle":[13]

When we are dealing with unfamiliar moral problems, we ought to rely less on automatic settings (automatic emotional responses) and more on manual mode (conscious, controlled reasoning), lest we bank on cognitive miracles.

Philosophical criticisms[edit]

Thomas Nagel has argued that Joshua Greene, in his book Moral Tribes, is too quick to conclude utilitarianism specifically from the general goal of constructing an impartial morality; for example, he says, Kant and Rawls offer other impartial approaches to ethical questions.[32]

Robert Wright has called[33] Joshua Greene's proposal for global harmony ambitious and adds, "I like ambition!" But he also claims that people have a tendency to see facts in a way that serves their ingroup, even if there's no disagreement about the underlying moral principles that govern the disputes. "If indeed we’re wired for tribalism," Wright explains, "then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe. Both Greene and Paul Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources -- even when they knew the assignment was random." Instead, Wright proposes that "nourishing the seeds of enlightenment indigenous to the world’s tribes is a better bet than trying to convert all the tribes to utilitarianism -- both more likely to succeed, and more effective if it does."

Berker's criticisms[edit]

In a widely cited critique of Greene's work and the philosophical implications of the dual process theory, Harvard philosophy professor Selim Berker critically analyzed four possible arguments for Greene's and Singer's conclusion.[9] He labels three of them merely rhetorical or "bad arguments", and the last one "the argument from morally irrelevant factors". According to Berker, all of them are fallacious.

Three bad arguments[edit]

The first is the “Emotions Bad, Reasoning Good” argument. On this view, our deontological intuitions are driven by emotions while consequentialist intuitions involve abstract reasoning; therefore deontological intuitions lack normative force, whereas consequentialist intuitions have it. Berker claims that this is question-begging for two reasons. First, the claim that emotionally driven intuitions are less reliable than those guided by reason is not supported by any substantive further argument, given that “there is a venerable tradition that sees emotions as an important way of discerning normative truths”.[9] Second, the argument seems to rely on the assumption that deontological intuitions involve only emotional processes whereas consequentialist intuitions involve only abstract reasoning. Berker points out that this assumption faces an empirical problem: Greene's own research[34] shows that consequentialist responses to personal moral dilemmas involve at least one brain region, the posterior cingulate, that is associated with emotional processes. Hence, he argues, it would be hard to justify the claim that deontological judgements are less reliable than consequentialist judgements by appealing to the role of emotions; this line of argument would amount to mere name-calling.

The second bad argument Berker presents, “The Argument from Heuristics”, is an improved version of the first. In support of the claim that deontological intuitions are unreliable because they are emotionally driven, it is argued that, just as in other domains, emotional processes in the moral domain tend to rely on fast heuristics and are thus unreliable. According to Berker, this line of thought is also flawed: in moral reasoning, unlike in other domains, it is highly debated whether moral questions have right and wrong answers at all, so the assumption that the emotional processes involved in moral judgement use heuristics is question-begging. Berker also challenges the very assumption that heuristics lead to unreliable judgements, and argues that, in any case, as far as we know consequentialist judgements too may rely on heuristics, given that it is highly unlikely that they could rely on an accurate and comprehensive mental calculation of all possible outcomes. Thus, the argument rests on implausible assumptions.[9]

The third argument is “The Argument from Evolutionary History”. It draws on the idea that our different moral responses to personal and impersonal harms are evolutionarily based. Since personal violence has existed since ancient times, humans developed emotional responses as innate alarm systems to adapt to, handle and promptly respond to such violence within their groups. Cases of impersonal violence do not trigger the same innate alarm, and therefore leave room for a more accurate and analytical judgement of the situation. Thus, according to this argument, emotion-based deontological intuitions, unlike consequentialist intuitions, are side effects of this evolutionary adaptation to a pre-existing environment; therefore “deontological intuitions, unlike consequentialist intuitions, do not have any normative force".[9] Berker states that this conclusion is incorrect because there is no reason to think that consequentialist intuitions are not also by-products of evolution.[9] Moreover, he argues that Singer's invitation[8] to separate evolutionarily based moral judgements (allegedly unreliable) from those based on reason is misleading because it rests on a false dichotomy.

The argument from morally irrelevant factors[edit]

Berker argued that the most promising argument from neural "is" to moral "ought" is the following.[9]

“P1. The emotional processing that gives rise to deontological intuitions responds to factors that make a dilemma personal rather than impersonal.

P2. The factors that make a dilemma personal rather than impersonal are morally irrelevant.

C1. So, the emotional processing that gives rise to deontological intuitions responds to factors that are morally irrelevant.

C2. So, deontological intuitions, unlike consequentialist intuitions, do not have any genuine normative force.”

Berker criticises both premises as well as the move from C1 to C2. Regarding P1, Berker is not convinced that deontological judgments are correctly characterized as merely responding to factors that make a dilemma personal. Regarding P2, he argues that the factors that make a dilemma personal or impersonal are not necessarily morally irrelevant. Finally, Berker concludes that even if the personal/impersonal distinction were indeed morally irrelevant, this would not show that deontological judgements lack genuine normative force; otherwise, the same could be said of consequentialist judgements.

Moral Enhancement[edit]

Thomas Douglas defines moral enhancement as follows: "A person morally enhances herself if she alters herself in a way that may reasonably be expected to result in her having morally better future motives, taken in sum, than she would otherwise have had".[35] Douglas argues that moral enhancement is not always morally impermissible, contrary to the Bioconservative Thesis, which states that "Even if it were technically possible and legally permissible for people to engage in biomedical enhancement, it would not be morally permissible for them to do so".[36] Douglas argues that the Bioconservative Thesis is predominantly based on social considerations: enhancement may be good for the enhanced individual but not for others (i.e. the rest of society). For example, an enhanced individual may be more intelligent than the average human and could therefore acquire multiple jobs in the market, which in turn diminishes the job opportunities available to other people. Nevertheless, Douglas argues that morally enhancing a human would not harm others, and thus that the Bioconservative Thesis is false.

Returning to his definition of moral enhancement, Douglas defines motives as "psychological (mental or neural) states or processes that will, given the absence of opposing motives, cause a person to act".[35] In this way, Douglas argues, the person who is morally enhanced is not necessarily moral, does not necessarily have a more moral character, and will not necessarily act more morally than her earlier, un-enhanced self.[35] He argues that moral enhancement should alter certain emotions "which interfere with putative good motives (moral emotions, reasoning processes, and combinations thereof) and/or which are themselves uncontroversially bad motives".[35] For example, attenuating a strong aversion to a certain racial group would be a way of morally enhancing a human: one could agree that such an aversion is uncontroversially a bad motive, so interfering with it would be for the best. Moral enhancement may be achieved by biological means (e.g. a pill) or nonbiological means (e.g. self-education). Douglas argues that moral enhancement technologies will be in use within the 'medium term' (i.e. within centuries).

Douglas sketches a scenario to show how moral enhancement can be morally permissible.[35] The scenario rests on the following assumptions. Say that Smith can undergo some biomedical intervention (e.g. take a pill) that will give him better motives. Before taking the pill, Smith has more bad motives than he would have after taking it. The pill will only alter some of Smith's emotions and will have no side effects, and Smith takes it voluntarily, without any sign of coercion. Douglas argues that under these circumstances it would be morally permissible for Smith to morally enhance himself. He supports this by first stating that, on a consequentialist view, it would be morally permissible for Smith to take the pill because doing so would expectably bring about good consequences. Second, on a non-consequentialist view, moral enhancement has some intrinsic property that gives Smith reason to perform it (such as the property of being an act of self-improvement). Douglas then considers objections to his claim.

One set of objections Douglas mentions is what he calls 'objectionable motives'.[35] This objection calls Smith's reason to enhance himself into question: Smith's best possible motive for enhancing himself may not be good enough. A proponent of this objection is Michael Sandel, on whose view the reason for Smith's enhancement is that he does not sufficiently accept 'the given'.[35] Douglas rejects this claim by arguing that, in the example above, Smith has no reason to accept his bad motives and their interference with his good motives; rather, the more appropriate attitude is one of non-acceptance and a desire for self-change. Furthermore, Douglas notes that opponents may argue that the enhancement restricts Smith's freedom: Smith will have less freedom to have, and to act upon, bad motives. Freedom, in this view, consists not merely in the absence of external constraints but also of internal ones, and it is only Smith's internal characteristics that would be altered by his moral enhancement.[35] On this view, "the self is divided into two parts- the true self, and a brute self that is external to the true self",[35] and the enhancement would alter the brute self in such a way that it constrains the true self, thus restricting Smith's freedom.[35] Douglas responds by arguing that if the self is divided into two parts, the enhancement would only alter the brute self: it would fundamentally restrict the brain's emotion-generating mechanisms, and it would be strange to think of such subconscious mechanisms as one's true self. Rather, the enhancement would increase the freedom of Smith's true self while diminishing his brute self. In this way, Smith would obtain more freedom to have, and to act upon, good motives.


  1. ^ a b c Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–8.
  2. ^ Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.
  3. ^ Greene, Joshua D. (October 2017). "The rat-a-gorical imperative: Moral intuition and the limits of affective learning". Cognition. 167: 66–77. doi:10.1016/j.cognition.2017.03.004. ISSN 0010-0277.
  4. ^ Greene, Joshua (October 2003). "From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?". Nature Reviews Neuroscience. 4 (10): 846–850. doi:10.1038/nrn1224. ISSN 1471-003X.
  5. ^ Greene, Joshua D. (2008). Sinnott-Armstrong, W., ed. "The Secret Joke of Kant's Soul". Moral Psychology: The Neuroscience of Morality. Cambridge, MA: MIT Press: 35–79.
  6. ^ a b c d Greene, Joshua D. (2014-07-01). "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics". Ethics. 124 (4): 695–726. doi:10.1086/675875. ISSN 0014-1704.
  7. ^ Railton, Peter (July 2014). "The Affective Dog and Its Rational Tale: Intuition and Attunement". Ethics. 124 (4): 813–859. doi:10.1086/675876. ISSN 0014-1704.
  8. ^ a b c d e f g Singer, Peter (October 2005). "Ethics and Intuitions". The Journal of Ethics. 9 (3–4): 331–352. doi:10.1007/s10892-005-3508-y. ISSN 1382-4554.
  9. ^ a b c d e f g h i j Berker, Selim (September 2009). "The Normative Insignificance of Neuroscience". Philosophy & Public Affairs. 37 (4): 293–329. doi:10.1111/j.1088-4963.2009.01164.x. ISSN 0048-3915.
  10. ^ Bruni, Tommaso; Mameli, Matteo; Rini, Regina A. (2013-08-25). "The Science of Morality and its Normative Implications". Neuroethics. 7 (2): 159–172. doi:10.1007/s12152-013-9191-y. ISSN 1874-5490.
  11. ^ Cushman, F.; Young, L.; Hauser, M. (2006). The Role of Conscious Reasoning and Intuition in Moral Judgment Testing Three Principles of Harm. Psychological Science, 17(12), 1082–1089.
  12. ^ Singer, Peter (2005-10-01). "Ethics and Intuitions". The Journal of Ethics. 9 (3): 331–352. doi:10.1007/s10892-005-3508-y. ISSN 1572-8609.
  13. ^ a b c d e f g h Greene, Joshua D. (2014-07-01). "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics". Ethics. 124 (4): 695–726. doi:10.1086/675875. ISSN 0014-1704.
  14. ^ Greene, Joshua. "Beyond Point-and-shoot morality: Why neuroscience matters for ethics". Ethics: 701–704.
  15. ^ a b Greene, Joshua D. (2015-01-01). "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics". The Law & Ethics of Human Rights. 9 (2). doi:10.1515/lehr-2015-0011. ISSN 2194-6531.
  16. ^ Greene, Joshua. "Beyond Point-and-Shoot morality: Why Cognitive (Neuro)science Matters for Ethics". Ethics: 701.
  17. ^ Boes, Aaron D.; et al. (2011). "Behavioral effects of congenital ventromedial prefrontal cortex malformation". BMC Neurology. 11 (151).
  18. ^ Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to prefrontal cortex increases utilitarian moral judgments. Nature, 446(7138), 908–911.
  19. ^ Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11(8), 322–3; author reply 323–4.
  20. ^ a b c d Damasio, Antonio (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
  21. ^ a b c Edmonds, David (2014). Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us about Right and Wrong. Princeton, NJ: Princeton University Press. pp. 137–139.
  22. ^ Singer, Peter (2005). "Ethics and Intuitions". The Journal of Ethics. 9: 331–352.
  23. ^ Greene, J. D., Morelli, S. a, Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–54.
  24. ^ Trémolière, B., Neys, W. De, & Bonnefon, J.-F. (2012). Mortality salience and morality: thinking about death makes people less utilitarian. Cognition, 124(3), 379–84.
  25. ^ Rand, David G., and Ziv G. Epstein. "Risking your life without a second thought: Intuitive decision-making and extreme altruism." PLoS ONE 9.10 (2014): e109687.
  26. ^ Moll, J., De Oliveira-Souza, R., & Zahn, R. (2008). The neural basis of moral cognition: sentiments, concepts, and values. Annals of the New York Academy of Sciences, 1124, 161–80.
  27. ^ Sun, R. (2012). Moral Judgement, Human Motivation, and Neural Networks. Cognitive Computation.
  28. ^ Koenigs, M., & Tranel, D. (2007). Irrational economic decision-making after ventromedial prefrontal damage: evidence from the Ultimatum Game. The Journal of Neuroscience, 27(4), 951–6.
  29. ^ Moll, J., & de Oliveira-Souza, R. (2007). Response to Greene: Moral sentiments and reason: friends or foes? Trends in Cognitive Sciences, 2(3-4), 336–52.
  30. ^ Bloom, P. (2011). Family, community, trolley problems, and the crisis in moral psychology. The Yale Review, 99(2), 26-43.
  31. ^ Greene, J. D. (2008). The secret joke of Kant’s soul. In Sinnott-Armstrong (Ed.), Moral Psychology: Volume 3 (pp. 35–80). Cambridge: MIT University Press.
  32. ^ Nagel, Thomas. "You Can't Learn About Morality from Brain Scans: The problem with moral psychology". New Republic. Retrieved 24 November 2013.
  33. ^ Wright, Robert (23 October 2013). "Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality". The Atlantic. Retrieved 24 November 2013.
  34. ^ Greene, Joshua D.; Nystrom, Leigh E.; Engell, Andrew D.; Darley, John M.; Cohen, Jonathan D. (October 2004). "The Neural Bases of Cognitive Conflict and Control in Moral Judgment". Neuron. 44 (2): 389–400. doi:10.1016/j.neuron.2004.09.027. ISSN 0896-6273.
  35. ^ a b c d e f g h i j Douglas, Thomas (August 2008). "Moral Enhancement". Journal of Applied Philosophy. 25 (3): 228–245. doi:10.1111/j.1468-5930.2008.00412.x. ISSN 0264-3758.
  36. ^ Sandel, Michael J. (2009). The Case Against Perfection: Ethics in the Age of Genetic Engineering. Belknap Press of Harvard University Press. ISBN 9780674019270. OCLC 910402669.