
Dual process theory (moral psychology)

From Wikipedia, the free encyclopedia


See also: Dual process theory

Dual process theory is an influential theory of human moral judgment which holds that human beings possess emotion-based and reason-based cognitive subsystems that compete in moral reasoning. Initially proposed by Jean Piaget,[1] the account was later developed and popularized by Joshua Greene and colleagues.

Assertions and influence

The dual process account asserts that human beings have two separate methods for moral reasoning. The first refers to intuitive or instinctual responses to moral violations. These responses are implicit, and the factors affecting them may be consciously inaccessible.[2] Greene asserts that these responses are supported by emotional activation. The second method refers to conscious, controlled reasoning processes. These processes set aside the emotional aspects of decision making and instead focus on maximizing gain or obtaining the most desirable overall outcome. In everyday decision making, most decisions engage one system or the other, but in moral dilemmas in which an individual must choose between violating moral rules and maximizing overall good, the systems come into conflict.

Greene ties the two processes to theories of ethics existing in moral philosophy, specifically consequentialism and deontological ethics.[3] He argues that the existing tension between systems of ethics that focus on "right action" and those that focus on "best results" can be explained by the existence of the proposed dueling systems in individual human minds.

This theory of moral judgment has influenced research in moral psychology. The original fMRI investigation[4] proposing the dual process account has been cited in more than 2,000 scholarly articles, generating extensive use of similar methodology as well as criticism. An alternative formulation of dual process theory in moral psychology has also been proposed.[5]

Evidence

The dual process account first grew out of fMRI experiments showing that moral dilemmas such as the trolley problem engaged areas of the brain corresponding to emotional processing when the context involved "personal" moral violations (such as direct bodily force). When the context of the dilemma was more "impersonal" (the decision maker pulls a switch rather than using bodily force), areas corresponding to working memory and controlled reasoning were engaged instead.[6] Neuropsychological evidence from lesion studies focusing on patients with damage to the ventromedial prefrontal cortex also points to a possible dissociation between emotional and rational decision processes. Damage to this area is typically associated with antisocial personality traits and impairments of moral decision making.[7] Patients with these lesions tend to show more frequent endorsement of the "utilitarian" path in trolley problem dilemmas.[8] Greene et al. claim that this shows that when emotional information is removed, whether by the context of the dilemma or by damage to the brain regions needed to process such information, the process associated with rational, controlled reasoning dominates decision making.[9]

Another critical piece of evidence supporting the dual process account comes from reaction time data in moral dilemma experiments. Subjects who chose the "utilitarian" path in "personal" moral dilemmas showed increased reaction times under high cognitive load, while those choosing the "deontological" path were unaffected.[10] Cognitive load in general has also been found to increase the likelihood of "deontological" judgment.[11]

Criticisms

Several criticisms have been leveled against the dual process account. The most common criticism asserts that the dual emotional/rational model ignores the motivational aspect of decision making in human social contexts.[12][13] A more specific example of this criticism focuses on the ventromedial prefrontal cortex lesion data. Although patients with this damage display characteristically "cold-blooded" behavior in the trolley problem, they are more likely to endorse emotionally laden choices in the Ultimatum Game.[14] It is argued that moral decisions are better understood as integrating emotional, rational, and motivational information, the last of which has been shown to involve areas of the brain in the limbic system and brain stem.[15]

Other criticisms focus on the methodology of using moral dilemmas such as the trolley problem. These criticisms note the lack of affective realism in contrived moral dilemmas and their tendency to rely on the actions of strangers as a window onto human moral sentiments. Paul Bloom, in particular, argues that a multitude of attitudes towards the agents involved are important in evaluating an individual's moral stance, as well as in evaluating the motivations that may inform those decisions.[16]

In a review of Greene's book Moral Tribes, Thomas Nagel argues that Greene is too quick to conclude that utilitarianism specifically follows from the general goal of constructing an impartial morality; Kant and Rawls, he notes, offer other impartial approaches to ethical questions.[17]

Robert Wright calls Joshua Greene's proposal for global harmony ambitious, adding, "I like ambition!"[18] But he also claims that people have a tendency to see facts in a way that serves their ingroup, even if there's no disagreement about the underlying moral principles that govern the disputes. "If indeed we’re wired for tribalism," Wright explains, "then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe. Both Greene and Paul Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources -- even when they knew the assignment was random." Instead, Wright proposes that "nourishing the seeds of enlightenment indigenous to the world’s tribes is a better bet than trying to convert all the tribes to utilitarianism -- both more likely to succeed, and more effective if it does."

References

  1. ^ Piaget, J. (1928). Judgment and Reasoning in the Child. London: Routledge & Kegan Paul. (Original work published 1924.)
  2. ^ Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
  3. ^ Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology: Volume 3 (pp. 35–80). Cambridge, MA: MIT Press.
  4. ^ Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science (New York, N.Y.), 293(5537), 2105–8.
  5. ^ Sun, R. (2013). Moral judgment, human motivation, and neural networks. Cognitive Computation, 5(4), 566–579.
  6. ^ Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science (New York, N.Y.), 293(5537), 2105–8.
  7. ^ Boes, A. D., et al. (2011). Behavioral effects of congenital ventromedial prefrontal cortex malformation. BMC Neurology, 11, 151.
  8. ^ Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to prefrontal cortex increases utilitarian moral judgments. Nature, 446(7138), 908–911.
  9. ^ Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11(8), 322–3; author reply 323–4.
  10. ^ Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–54.
  11. ^ Trémolière, B., De Neys, W., & Bonnefon, J.-F. (2012). Mortality salience and morality: Thinking about death makes people less utilitarian. Cognition, 124(3), 379–84.
  12. ^ Moll, J., De Oliveira-Souza, R., & Zahn, R. (2008). The neural basis of moral cognition: sentiments, concepts, and values. Annals of the New York Academy of Sciences, 1124, 161–80.
  13. ^ Sun, R. (2013). Moral judgment, human motivation, and neural networks. Cognitive Computation, 5(4), 566–579.
  14. ^ Koenigs, M., & Tranel, D. (2007). Irrational economic decision-making after ventromedial prefrontal damage: evidence from the Ultimatum Game. The Journal of Neuroscience, 27(4), 951–6.
  15. ^ Moll, J., & de Oliveira-Souza, R. (2007). Response to Greene: Moral sentiments and reason: friends or foes? Trends in Cognitive Sciences, 11(8), 323–4.
  16. ^ Bloom, P. (2011). Family, community, trolley problems, and the crisis in moral psychology. The Yale Review, 99(2), 26-43.
  17. ^ Nagel, Thomas. "You Can't Learn About Morality from Brain Scans: The problem with moral psychology". New Republic. Retrieved 24 November 2013.
  18. ^ Wright, Robert (23 October 2013). "Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality". The Atlantic. Retrieved 24 November 2013.