Joshua Greene (psychologist)

From Wikipedia, the free encyclopedia

Joshua Greene
Joshua Greene in 2018
Alma mater: Princeton University, Harvard University
Known for: Dual-process theory (moral psychology)
Scientific career
Fields: experimental psychology, moral psychology, neuroscience, social psychology, philosophy
Institutions: Harvard University
Thesis: The Terrible, Horrible, No Good, Very Bad Truth About Morality and What to Do About It (2002)
Doctoral advisors: David Lewis and Gilbert Harman
Website: www.joshua-greene.net

Joshua David Greene is an American experimental psychologist, neuroscientist, and philosopher. He is a Professor of Psychology at Harvard University. Most of his research and writing has been concerned with moral judgment and decision-making. His recent research focuses on fundamental issues in cognitive science.[1][2]

Education and career

Greene attended high school in Fort Lauderdale, Broward County, Florida.[3] He briefly attended the Wharton School of the University of Pennsylvania before transferring to Harvard University.[4] He earned a bachelor's degree in philosophy from Harvard in 1997,[5] followed by a Ph.D. in philosophy at Princeton University under the supervision of David Lewis and Gilbert Harman. Peter Singer also served on his dissertation committee. His 2002 dissertation, The Terrible, Horrible, No Good, Very Bad Truth About Morality and What to Do About It, argues against moral-realist language and in defense of non-realist utilitarianism as a better framework for resolving disagreements.[6] Greene served as a postdoctoral fellow at Princeton in the Neuroscience of Cognitive Control Laboratory before returning to Harvard in 2006 as an assistant professor. In 2011, he became the John and Ruth Hazel Associate Professor of the Social Sciences. Since 2014, he has been a Professor of Psychology.

Dual-process theory

Greene and colleagues have advanced a dual-process theory of moral judgment, suggesting that moral judgments are determined by both automatic, emotional responses and controlled, conscious reasoning. In particular, Greene argues that the "central tension" in ethics between deontology (rights- or duty-based moral theories) and consequentialism (outcome-based theories) reflects the competing influences of these two types of processes:

Characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control.[7]

In one of the first experiments to suggest a moral dual-process model,[3] Greene and colleagues showed that people making judgments about "personal" moral dilemmas (like whether to push one person in front of an oncoming trolley in order to save five others) engaged several brain regions associated with emotion that were not activated by judgments that were more "impersonal" (like whether to pull a switch to redirect a trolley from a track on which it would kill five people onto a track on which it would kill one other person instead).[8] They also found that for the dilemmas involving "personal" moral questions, those who did make the intuitively unappealing choice had longer reaction times than those who made the more emotionally pleasant decision.

A follow-up study compared "easy" personal moral questions, to which subjects had fast reaction times, against "hard" dilemmas (like the footbridge problem), to which they had slow reaction times.[9] When responding to the hard problems, subjects displayed increased activity in the anterior dorsolateral prefrontal cortex (DLPFC) and inferior parietal lobes, areas associated with cognitive processing, as well as the anterior cingulate cortex, which has been implicated in detecting conflict between competing inputs, as in the Stroop task. This comparison showed that harder problems engaged different brain regions, but it did not show differential activity for the same moral problem depending on the answer given. The second part of the study addressed this: for a given question, subjects who made the utilitarian choice showed higher activity in the anterior DLPFC and the right inferior parietal lobe than subjects who made the non-utilitarian choice.

These two studies were correlational, but others have since suggested a causal impact of emotional vs. cognitive processing on deontological vs. utilitarian judgments.[10][11][12] A 2008 study[13] by Greene showed that cognitive load caused subjects to take longer to respond when they made a utilitarian moral judgment but had no effect on response time when they made a non-utilitarian judgment, suggesting that the utilitarian thought processes required extra cognitive effort.

Greene's 2008 article "The Secret Joke of Kant's Soul"[14] argues that Kantian/deontological ethics is best understood as rationalization rather than rationalism—an attempt to justify intuitive moral judgments post-hoc. Several philosophers have written critical responses.[15][16][17]

Moral Tribes

Drawing on dual-process theory, as well as evolutionary psychology and other neuroscience work, Greene's book Moral Tribes (2013) explores how our ethical intuitions play out in the modern world.[18]

Greene posits that humans have an instinctive, automatic tendency to cooperate with others in their social group in tragedy-of-the-commons scenarios ("me versus us"). For example, in a cooperative investment game, people are more likely to do what is best for the group when they are under time pressure or when they are primed to "go with their gut"; conversely, cooperation can be inhibited by rational calculation.[19] However, on questions of inter-group harmony ("us versus them"), automatic intuitions run into a problem, which Greene calls the "tragedy of commonsense morality": the same ingroup loyalty that achieves cooperation within a community leads to hostility between communities. In response, Greene proposes a "metamorality" based on a "common currency" that all humans can agree upon, and suggests that utilitarianism, or as he calls it, "deep pragmatism", is up to the task.[20]

Reception

Moral Tribes received multiple positive reviews.[21][22][23][24]

Thomas Nagel critiques the book by suggesting that Greene is too quick to infer utilitarianism specifically from the general goal of constructing an impartial morality; for example, he says, Immanuel Kant and John Rawls offer other impartial approaches to ethical questions.[20]

Robert Wright calls[25] Greene's proposal for global harmony ambitious and adds, "I like ambition!" But he also claims that people have a tendency to see facts in a way that serves their ingroup, even if there's no disagreement about the underlying moral principles that govern the disputes. "If indeed we're wired for tribalism", Wright explains, "then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe. Both Greene and Paul Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources—even when they knew the assignment was random." Instead, Wright proposes that "nourishing the seeds of enlightenment indigenous to the world's tribes is a better bet than trying to convert all the tribes to utilitarianism—both more likely to succeed, and more effective if it does."

Greene's metamorality of deep pragmatism has been criticized by Steven Kraaijeveld and Hanno Sauer for resting on conflicting arguments about moral truth.[26]

Awards and distinctions

Greene received the 2012 Stanton Prize from the Society for Philosophy and Psychology.[27]

In 2013, Greene was awarded the Roslyn Abramson Award, given annually to Harvard faculty "in recognition of his or her excellence and sensitivity in teaching undergraduates".[5]

Bibliography

  • Greene, Joshua D; Sommerville, R Brian; Nystrom, Leigh E; Darley, John M; Cohen, Jonathan D (2001). "An fMRI investigation of emotional engagement in moral judgment". Science. 293 (5537): 2105–2108. Bibcode:2001Sci...293.2105G. doi:10.1126/science.1062872. PMID 11557895. S2CID 1437941.
  • Greene, Joshua; Jonathan Haidt (2002). "How (and where) does moral judgment work?". Trends in Cognitive Sciences. 6 (12): 517–523. doi:10.1016/S1364-6613(02)02011-9. PMID 12475712. S2CID 6777806.
  • Greene, Joshua D; Nystrom, Leigh E; Engell, Andrew D; Darley, John M; Cohen, Jonathan D (2004). "The neural bases of cognitive conflict and control in moral judgment". Neuron. 44 (2): 389–400. doi:10.1016/j.neuron.2004.09.027. hdl:10983/15961. PMID 15473975. S2CID 9061712.
  • Greene, Joshua D (2008). "The Secret Joke of Kant's Soul". In Sinnott-Armstrong, Walter (ed.). Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. MIT Press. pp. 35–80. ISBN 978-0-262-19564-5.

References

  1. ^ Cooper, Dani (August 25, 2015). "Brain turns words into complex thoughts like a computer". Australian Broadcasting Corporation.
  2. ^ Frankland, Steven M.; Greene, Joshua D. (September 15, 2015). "An architecture for encoding sentence meaning in left mid-superior temporal cortex". Proceedings of the National Academy of Sciences. 112 (37): 11732–11737. Bibcode:2015PNAS..11211732F. doi:10.1073/pnas.1421236112. PMC 4577152. PMID 26305927.
  3. ^ a b Ohlson, Kristin. "The Vexing Mental Tug-of-War Called Morality". Discover (July–August 2011). Retrieved September 6, 2015.
  4. ^ Greene, Joshua D. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press. ISBN 9781101638675.
  5. ^ a b Manning, Colin (May 29, 2013). "Two named Abramson winners". Harvard Gazette. Retrieved September 6, 2015.
  6. ^ Greene, Joshua David (2002). The terrible, horrible, no good, very bad truth about morality and what to do about it (Thesis). CiteSeerX 10.1.1.174.5109. OCLC 54743074. S2CID 170676316.
  7. ^ Greene, Joshua D. (July 2014). "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics". Ethics. 124 (4): 695–726. doi:10.1086/675875. S2CID 9063016.
  8. ^ Greene, Joshua D.; Sommerville, R. Brian; Nystrom, Leigh E.; Darley, John M.; Cohen, Jonathan D. (September 14, 2001). "An fMRI Investigation of Emotional Engagement in Moral Judgment". Science. 293 (5537): 2105–2108. Bibcode:2001Sci...293.2105G. doi:10.1126/science.1062872. PMID 11557895. S2CID 1437941.
  9. ^ Greene, Joshua D.; Nystrom, Leigh E.; Engell, Andrew D.; Darley, John M.; Cohen, Jonathan D. (October 2004). "The Neural Bases of Cognitive Conflict and Control in Moral Judgment". Neuron. 44 (2): 389–400. doi:10.1016/j.neuron.2004.09.027. hdl:10983/15961. PMID 15473975. S2CID 9061712.
  10. ^ Mendez, Mario F; Anderson, Eric; Shapira, Jill S (December 2005). "An Investigation of Moral Judgement in Frontotemporal Dementia". Cognitive and Behavioral Neurology. 18 (4): 193–197. doi:10.1097/01.wnn.0000191292.17964.bb. PMID 16340391. S2CID 19276703.
  11. ^ Koenigs, Michael; Young, Liane; Adolphs, Ralph; Tranel, Daniel; Cushman, Fiery; Hauser, Marc; Damasio, Antonio (April 2007). "Damage to the prefrontal cortex increases utilitarian moral judgements". Nature. 446 (7138): 908–911. Bibcode:2007Natur.446..908K. doi:10.1038/nature05631. PMC 2244801. PMID 17377536.
  12. ^ Valdesolo, Piercarlo; DeSteno, David (June 2006). "Manipulations of Emotional Context Shape Moral Judgment". Psychological Science. 17 (6): 476–477. doi:10.1111/j.1467-9280.2006.01731.x. PMID 16771796. S2CID 13511311.
  13. ^ Greene, Joshua D.; Morelli, Sylvia A.; Lowenberg, Kelly; Nystrom, Leigh E.; Cohen, Jonathan D. (June 2008). "Cognitive load selectively interferes with utilitarian moral judgment". Cognition. 107 (3): 1144–1154. doi:10.1016/j.cognition.2007.11.004. PMC 2429958. PMID 18158145.
  14. ^ Sinnott-Armstrong, Walter, ed. (2008). Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press. ISBN 978-0-262-33728-1. OCLC 605120795.
  15. ^ Lott, Micah (October 2016). "Moral Implications from Cognitive (Neuro)Science? No Clear Route". Ethics. 127 (1): 241–256. doi:10.1086/687337. S2CID 151940241.
  16. ^ Königs, Peter (April 3, 2018). "Two types of debunking arguments". Philosophical Psychology. 31 (3): 383–402. doi:10.1080/09515089.2018.1426100. S2CID 148678250.
  17. ^ Meyers, C. D. (May 19, 2015). "Brains, trolleys, and intuitions: Defending deontology from the Greene/Singer argument". Philosophical Psychology. 28 (4): 466–486. doi:10.1080/09515089.2013.849381. S2CID 146547149.
  18. ^ Greene, Joshua (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Press. ISBN 978-1594202605.[non-primary source needed]
  19. ^ Greene, Joshua D. "Deep Pragmatism". Edge. Retrieved November 24, 2013.
  20. ^ a b Nagel, Thomas. "You Can't Learn About Morality from Brain Scans: The problem with moral psychology". New Republic. Retrieved November 24, 2013.
  21. ^ Waytz, Adam (November 2, 2013). "'Moral Tribes' by Joshua Greene". Boston Globe. Retrieved November 24, 2013.
  22. ^ "Moral Tribes: Emotion, Reason, and the Gap Between Us and Them". Kirkus Reviews. August 19, 2013. Retrieved November 24, 2013.
  23. ^ "The Brain's Way Of Dealing With 'Us' and 'Them'". Wall Street Journal. November 23, 2013.
  24. ^ Baggini, Julian (January 3, 2014). "The social animal". Financial Times.
  25. ^ Wright, Robert (October 23, 2013). "Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality". The Atlantic. Retrieved November 24, 2013.
  26. ^ Kraaijeveld, Steven R.; Sauer, Hanno (July 2019). "Metamorality without Moral Truth". Neuroethics. 12 (2): 119–131. doi:10.1007/s12152-018-9378-3. S2CID 149750930.
  27. ^ "Prizes". Society for Philosophy and Psychology. Retrieved September 6, 2015.

External links

Official website