Ultimatum game

Figure: Extensive-form representation of a two-proposal ultimatum game. Player 1 can offer a fair (F) or unfair (U) proposal; player 2 can accept (A) or reject (R).

The ultimatum game is a game in economic experiments. The first player (the proposer) receives a sum of money and proposes how to divide the sum between the proposer and the other player. The second player (the responder) chooses to either accept or reject this proposal. If the second player accepts, the money is split according to the proposal. If the second player rejects, neither player receives any money. The game is typically played only once so that reciprocation is not an issue.

Equilibrium analysis

For illustration, we will suppose there is a smallest division of the good available (say 1 cent). Suppose that the total amount of money available is x.

The first player chooses some amount p that he will keep for himself in the interval [0, x]; the second player will then receive x − p. The second player chooses some function f: [0, x] → {"accept", "reject"} (i.e. the second chooses which divisions to accept and which to reject). We will represent the strategy profile as (p, f), where p is the proposal and f is the function. If f(p) = "accept", the first receives p and the second x − p; otherwise both get zero.
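To make this concrete, the following is a minimal Python sketch of the payoff rule under the discretization assumed above; the function payoffs and the example responder strategy are illustrative names, not part of any standard formulation.

```python
# Minimal sketch of ultimatum-game payoffs for a strategy profile (p, f).
# x is the total amount in the smallest unit (e.g. cents), p is the amount the
# proposer keeps, and f maps each possible p to "accept" or "reject".

def payoffs(p, f, x):
    """Return (proposer payoff, responder payoff) for the profile (p, f)."""
    if not 0 <= p <= x:
        raise ValueError("the proposal must lie in [0, x]")
    if f(p) == "accept":
        return p, x - p   # split according to the proposal
    return 0, 0           # rejection: neither player receives anything

# Example: a responder who accepts only offers of at least 30% of the pie.
responder = lambda p, x=100: "accept" if (x - p) >= 0.3 * x else "reject"
print(payoffs(60, responder, 100))  # (60, 40) -- accepted
print(payoffs(80, responder, 100))  # (0, 0)   -- rejected
```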

(p, f) is a Nash equilibrium of the ultimatum game if f(p) = "accept" and there is no y > p such that f(y) = "accept" (i.e. player 2 would reject all proposals in which player 1 receives more than p). The first player would not want to unilaterally increase his/her demand, since the second would reject any higher demand. The second would not want to reject the demand, since he/she would then get nothing.

There is one other Nash equilibrium where p = x and f(y) = "reject" for all y > 0 (i.e. the second rejects all demands that give the first any amount at all). Here both players get nothing, but neither could get more by unilaterally changing his/her strategy.
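Both equilibrium families can be checked mechanically. The sketch below, again under the discretization assumed earlier, tests whether any unilateral deviation improves a player's payoff; is_nash and the example strategies are illustrative, not taken from the literature.

```python
def is_nash(p, f, x):
    """Check whether the profile (p, f) is a Nash equilibrium of the discretized game."""
    # Responder deviation: only the response to the actual proposal p matters.
    responder_now = (x - p) if f(p) == "accept" else 0
    if max(x - p, 0) > responder_now:     # flipping f(p) would pay more
        return False
    # Proposer deviation: the largest accepted demand must not beat the current payoff.
    proposer_now = p if f(p) == "accept" else 0
    accepted = [q for q in range(x + 1) if f(q) == "accept"]
    if accepted and max(accepted) > proposer_now:
        return False
    return True

x = 100
accept_up_to_99 = lambda q: "accept" if q <= 99 else "reject"   # rejects only a demand of everything
reject_all_positive = lambda q: "accept" if q == 0 else "reject"
accept_all = lambda q: "accept"

print(is_nash(99, accept_up_to_99, x))       # True: first family (99 is the largest accepted demand)
print(is_nash(100, reject_all_positive, x))  # True: second family, both players get nothing
print(is_nash(50, accept_all, x))            # False: the proposer could profitably demand more
```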

However, only one of these Nash equilibria satisfies a more restrictive equilibrium concept, subgame perfection. Suppose that the first demands a large amount that gives the second some (small) amount of money. By rejecting the demand, the second is choosing nothing rather than something. So, it would be better for the second to choose to accept any demand that gives him/her any amount whatsoever. If the first knows this, he/she will give the second the smallest (non-zero) amount possible.[1]
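A short backward-induction sketch, assuming as in the argument above that a responder who gains nothing by rejecting will accept any positive offer (the function names are only illustrative):

```python
# Backward induction in the discretized ultimatum game: the responder's best
# reply in every subgame is to accept any positive offer, so the proposer keeps
# everything except the smallest positive unit.

def subgame_perfect_split(x, unit=1):
    def responder_best_reply(offer):
        return "accept" if offer > 0 else "reject"   # something beats nothing
    # The proposer anticipates this and picks the largest demand that is still accepted.
    best_p = max(p for p in range(0, x + 1, unit)
                 if responder_best_reply(x - p) == "accept")
    return best_p, x - best_p

print(subgame_perfect_split(100))  # (99, 1): offer the smallest non-zero amount
```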

Experimental results

When carried out between members of a shared social group (e.g., a village, a tribe, a nation, humanity),[2] people offer "fair" (i.e., 50:50) splits, and offers of less than 30% are often rejected.[3]

One limited study on monozygotic and dizygotic twins claims that genetic variation can have an effect on reactions to unfair offers, though the study failed to employ actual controls for environmental differences.[4] It has also been found that delaying the responder's decision makes people accept "unfair" offers more often.[5][6][7] Common chimpanzees behaved similarly to humans by proposing fair offers in one version of the ultimatum game involving direct interaction between the chimpanzees.[8] However, another study, also published in November 2012, showed that both common chimpanzees and bonobos, tested with a mechanical apparatus, did not reject unfair offers.[9] As of February 2015, bonobos have not been studied using the protocol involving direct interaction.

Explanations

The highly mixed results (along with similar results in the Dictator game) have been taken to be both evidence for and against the so-called "Homo economicus" assumptions of rational, utility-maximizing, individual decisions. Since an individual who rejects a positive offer is choosing to get nothing rather than something, that individual must not be acting solely to maximize his economic gain, unless one incorporates economic applications of social, psychological, and methodological factors (such as the observer effect).[citation needed] Several attempts have been made to explain this behavior. Some suggest that individuals are maximizing their expected utility, but money does not translate directly into expected utility.[10] Perhaps individuals get some psychological benefit from engaging in punishment or receive some psychological harm from accepting a low offer. It could also be the case that the second player, by having the power to reject the offer, uses such power as leverage against the first player, thus motivating him to be fair.[citation needed]

The classical explanation of the ultimatum game as a well-formed experiment approximating general behaviour often leads to the conclusion that the assumed rational behavior is accurate to a degree, but must encompass additional vectors of decision making.[citation needed] Behavioral economic and psychological accounts suggest that second players who reject offers less than 50% of the amount at stake do so for one of two reasons. An altruistic punishment account suggests that rejections occur out of altruism: people reject unfair offers to teach the first player a lesson and thereby reduce the likelihood that the player will make an unfair offer in the future. Thus, rejections are made to benefit the second player in the future, or other people in the future. By contrast, a self-control account suggests that rejections constitute a failure to inhibit a desire to punish the first player for making an unfair offer. Morewedge, Krishnamurti, and Ariely (2014) found that intoxicated participants were more likely to reject unfair offers than sober participants.[11] As intoxication tends to exacerbate decision makers' prepotent response, this result provides more support for the self-control account than for the altruistic punishment account. Other research from social cognitive neuroscience supports this finding.[12]

However, several competing models suggest ways to bring the cultural preferences of the players within the optimized utility function of the players in such a way as to preserve the utility-maximizing agent as a feature of microeconomics. For example, researchers have found that Mongolian proposers tend to offer even splits despite knowing that very unequal splits are almost always accepted.[13] Similar results from players in other small-scale societies have led some researchers to conclude that "reputation" is seen as more important than any economic reward.[13] Others have proposed that the social status of the responder may be part of the payoff.[14] Another way of integrating the conclusion with utility maximization is some form of inequity aversion model (preference for fairness). Even in anonymous one-shot settings, the outcome suggested by economic theory (minimal money transfer and acceptance) is rejected by over 80% of the players.[citation needed]
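One common inequity-aversion formulation (not specified in this article) is the Fehr–Schmidt model, in which a player's utility is reduced by both disadvantageous and advantageous inequality. The sketch below shows, with purely illustrative parameter values, how such preferences can make rejecting a very uneven split the utility-maximizing choice for the responder.

```python
# Sketch of a Fehr-Schmidt style inequity-aversion utility.
# U_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
# alpha (envy) and beta (guilt) below are illustrative values, not estimates.

def inequity_averse_utility(own, other, alpha=2.0, beta=0.25):
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# A responder facing a 90:10 split of 100: accepting yields 10 minus a large
# envy term, while rejecting yields 0 for both players and no inequality.
u_accept = inequity_averse_utility(own=10, other=90)   # 10 - 2.0 * 80 = -150
u_reject = inequity_averse_utility(own=0, other=0)     # 0
print("reject" if u_reject > u_accept else "accept")   # prints "reject"
```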

An explanation which was originally quite popular was the "learning" model, in which it was hypothesized that proposers' offers would decay towards the subgame perfect Nash equilibrium (almost zero) as they mastered the strategy of the game; this decay tends to be seen in other iterated games.[citation needed] However, this explanation (bounded rationality) is less commonly offered now, in light of subsequent empirical evidence.[15]

It has been hypothesised (e.g. by James Surowiecki) that very unequal allocations are rejected only because the absolute amount of the offer is low.[citation needed] The concept here is that if the amount to be split were ten million dollars, a 90:10 split would probably be accepted rather than rejected, since rejecting it would mean spurning a million-dollar offer. Essentially, this explanation says that the absolute amount of the endowment is not significant enough to produce strategically optimal behaviour. However, many experiments have been performed where the amount offered was substantial: studies by Cameron and Hoffman et al. have found that higher stakes cause offers to move closer to an even split, even in a 100 USD game played in Indonesia, where average per-capita income is much lower than in the United States. Rejections are reportedly independent of the stakes at this level, with 30 USD offers being turned down in Indonesia, as in the United States, even though this equates to two weeks' wages in Indonesia. However, 2011 research with stakes of up to 40 weeks' wages in India showed that "as stakes increase, rejection rates approach zero".[16]

Neurological explanations

Generous offers in the ultimatum game (offers exceeding the minimum acceptable offer) are commonly made. Zak, Stanton & Ahmadi (2007)[17] showed that two factors can explain generous offers: empathy and perspective taking.[clarification needed] They varied empathy by infusing participants with intranasal oxytocin or placebo (blinded). They affected perspective-taking by asking participants to make choices as both player 1 and player 2 in the ultimatum game, with later random assignment to one of these. Oxytocin increased generous offers by 80% relative to placebo. Oxytocin did not affect the minimum acceptance threshold or offers in the dictator game (meant to measure altruism). This indicates that emotions drive generosity.

Rejections in the ultimatum game have been shown to be caused by adverse physiologic reactions to stingy offers.[18] In a brain imaging experiment by Sanfey et al., stingy offers (relative to fair and hyperfair offers) differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral disgust. If Player 1 in the ultimatum game anticipates this response to a stingy offer, they may be more generous.

An increase in rational decisions in the game has been found among experienced Buddhist meditators. fMRI data show that meditators recruit the posterior insular cortex (associated with interoception) during unfair offers and show reduced activity in the anterior insular cortex compared to controls.[19]

People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with normal serotonin levels.[20]

This is true whether the players are on placebo or are infused with a hormone that makes them more generous in the ultimatum game.[21][22]

People who have ventromedial frontal cortex lesions were found to be more likely to reject unfair offers.[23] This was suggested to be due to the abstractness and delay of the reward, rather than an increased emotional response to the unfairness of the offer.[24]

Evolutionary game theory

Other authors have used evolutionary game theory to explain behavior in the ultimatum game.[25] Simple evolutionary models, e.g. the replicator dynamics, cannot account for the evolution of fair proposals or for rejections.[citation needed] These authors have attempted to provide increasingly complex models to explain fair behavior.
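As a rough illustration of this point, the following sketch simulates discrete-time replicator dynamics on a coarsely discretized ultimatum game in which a strategy is a pair (offer made as proposer, minimum offer accepted as responder). It is a schematic simulation under assumed parameters, not a reproduction of any published model; in runs of this kind the population tends to drift toward low offers and low acceptance thresholds rather than fair splits.

```python
import itertools

PIE = 5  # total to divide, in discrete units
# A strategy is (offer when proposer, minimum acceptable offer when responder).
strategies = list(itertools.product(range(PIE + 1), repeat=2))

def payoff(s, t):
    """Average payoff of strategy s against t, each role played once."""
    offer_s, accept_s = s
    offer_t, accept_t = t
    as_proposer = (PIE - offer_s) if offer_s >= accept_t else 0
    as_responder = offer_t if offer_t >= accept_s else 0
    return (as_proposer + as_responder) / 2

# Discrete-time replicator dynamics from a uniform initial population:
# a strategy's share grows in proportion to its fitness against the population.
shares = {s: 1 / len(strategies) for s in strategies}
for _ in range(1000):
    fitness = {s: sum(shares[t] * payoff(s, t) for t in strategies) for s in strategies}
    mean_fitness = sum(shares[s] * fitness[s] for s in strategies)
    shares = {s: shares[s] * fitness[s] / mean_fitness for s in strategies}

mean_offer = sum(shares[s] * s[0] for s in strategies)
mean_threshold = sum(shares[s] * s[1] for s in strategies)
print(f"mean offer: {mean_offer:.2f}, mean acceptance threshold: {mean_threshold:.2f}")
```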

Sociological applications

The ultimatum game is important from a sociological perspective, because it illustrates the human unwillingness to accept injustice. The tendency to refuse small offers may also be seen as relevant to the concept of honour.

The extent to which people are willing to tolerate different distributions of the reward from "cooperative" ventures results in inequality that is, measurably, exponential across the strata of management within large corporations. See also: Inequity aversion within companies.

Some see the implications of the ultimatum game as profoundly relevant to the relationship between society and the free market, with Prof. P.J. Hill, (Wheaton College, Illinois) saying:

I see the [ultimatum] game as simply providing counter evidence to the general presumption that participation in a market economy (capitalism) makes a person more selfish.[26]

History

The first ultimatum game was developed in 1982 as a stylized representation of negotiation, by Güth, Schmittberger, and Schwarze.[27] It has since become a popular economic experiment, and was said to be "quickly catching up with the Prisoner's Dilemma as a prime showpiece of apparently irrational behavior" in a paper by Martin Nowak, Karen M. Page, and Karl Sigmund.[28]

Variants

In the "competitive ultimatum game" there are many proposers and the responder can accept at most one of their offers: With more than three (naïve) proposers the responder is usually offered almost the entire endowment[29] (which would be the Nash Equilibrium assuming no collusion among proposers).

In the "ultimatum game with tipping", a tip is allowed from responder back to proposer, a feature of the trust game, and net splits tend to be more equitable.[30]

The "reverse ultimatum game" gives more power to the responder by giving the proposer the right to offer as many divisions of the endowment as they like. Now the game only ends when the responder accepts an offer or abandons the game, and therefore the proposer tends to receive slightly less than half of the initial endowment.[31]

Incomplete information ultimatum games: Some authors have studied variants of the ultimatum game in which either the proposer or the responder has private information about the size of the pie to be divided.[32][33] These experiments connect the ultimatum game to principal-agent problems studied in contract theory.

The pirate game illustrates a variant with more than two participants with voting power, as illustrated in Ian Stewart's "A Puzzle for Pirates".[34]

Notes

  1. ^ Technically, making a zero offer to the responder and accepting this offer is also a Nash Equilibrium, as the responder's threat to reject the offer is no longer credible, since he/she now gains nothing (materially) by refusing the zero amount offered. Normally, when a player is indifferent between various strategies, the principle in Game Theory is that the strategy with an outcome which is Pareto optimally better for the other players is chosen (as a sort of tie-breaker to create a unique NE). However, it is generally assumed that this principle should not apply to ultimatum game players offered nothing; they are instead assumed to reject the offer although accepting it would be an equally subgame perfect NE. For instance, the University of Wisconsin summary: Testing Subgame Perfection Apart From Fairness in Ultimatum Games from 2002 admits the possibility that the proposer may offer nothing but qualifies the subgame perfect NE with the words (almost nothing) throughout the Introduction.
  2. ^ Sanfey, Alan (13 June 2003). "The Neural Basis of Economic Decision-Making in the Ultimatum Game". Science. 300 (5626): 1755–1758. doi:10.1126/science.1082976. JSTOR 3834595. PMID 12805551.
  3. ^ See Joseph Henrich et al. (2004) and Oosterbeek et al. (2004).
  4. ^ http://www.pnas.org/content/105/10/3721.full.pdf+html
  5. ^ Bosman, Ronald; Sonnemans, Joep; Zeelenberg, Marcel Zeelenberg (2001). "Emotions, rejections, and cooling off in the ultimatum game". Unpublished manuscript, University of Amsterdam.
  6. ^ See Grimm and Mengel (2011)
  7. ^ Oechssler, Jörg; Roider, Andreas; Schmitz, Patrick W. (2015). "Cooling Off in Negotiations: Does it Work?". Journal of Institutional and Theoretical Economics JITE. 171 (4): 565–588. doi:10.1628/093245615X14307212950056.
  8. ^ Proctor, Darby (2013). "Chimpanzees play the ultimatum game". PNAS. 110 (6): 2070–2075. doi:10.1073/pnas.1220806110.
  9. ^ Kaiser, Ingrid (2012). "Theft in an ultimatum game: chimpanzees and bonobos are insensitive to unfairness". Biology Letters. 8 (6): 942–945. doi:10.1098/rsbl.2012.0519.
  10. ^ See Bolton (1991), and Ochs and Roth, A. E. (1989).
  11. ^ Morewedge, Carey K.; Krishnamurti, Tamar; Ariely, Dan (2014-01-01). "Focused on fairness: Alcohol intoxication increases the costly rejection of inequitable rewards". Journal of Experimental Social Psychology. 50: 15–20. doi:10.1016/j.jesp.2013.08.006.
  12. ^ Tabibnia, Golnaz; Satpute, Ajay B.; Lieberman, Matthew D. (2008-04-01). "The Sunny Side of Fairness Preference for Fairness Activates Reward Circuitry (and Disregarding Unfairness Activates Self-Control Circuitry)". Psychological Science. 19 (4): 339–347. doi:10.1111/j.1467-9280.2008.02091.x. ISSN 0956-7976. PMID 18399886.
  13. ^ a b Mongolian/Kazakh study conclusion from University of Pennsylvania.
  14. ^ Social Role in the Ultimatum Game
  15. ^ A forthcoming paper,[when?] "On the Behavior of Proposers in Ultimatum Games" (Journal of Economic Behavior and Organization), has the thesis that learning will not cause NE-convergence; see the abstract.
  16. ^ Andersen, Steffen; Ertaç, Seda; Gneezy, Uri; Hoffman, Moshe; List, John A (2011). "Stakes Matter in Ultimatum Games". American Economic Review. 101 (7): 3427–3439. doi:10.1257/aer.101.7.3427. ISSN 0002-8282.
  17. ^ Zak PJ, Stanton AA, Ahmadi S (2007). Oxytocin Increases Generosity in Humans. PLoS ONE. 2 (11): e1128.
  18. ^ Sanfey, et al. (2002)
  19. ^ Kirk; et al. (2011). "Interoception Drives Increased Rational Decision-Making in Meditators Playing the Ultimatum Game". Frontiers in Neuroscience. 5: 49. doi:10.3389/fnins.2011.00049. PMC 3082218. PMID 21559066.
  20. ^ Crockett, Molly J. (2008-06-05). "Serotonin Modulates Behavioral Reactions to Unfairness". Science. 320 (5884): 1155577. doi:10.1126/science.1155577. PMC 2504725. PMID 18535210.
  21. ^ Neural Substrates of Decision-Making in Economic Games. Scientific Journals International.
  22. ^ Oxytocin Increases Generosity in Humans. PLoS ONE. 2 (11): e1128.
  23. ^ Koenigs, Michael; Daniel Tranel (January 2007). "Irrational Economic Decision-Making after Ventromedial Prefrontal Damage: Evidence from the Ultimatum Game". Journal of Neuroscience. 27 (4): 951–956. doi:10.1523/JNEUROSCI.4606-06.2007. PMC 2490711. PMID 17251437.
  24. ^ Moretti, Laura; Davide Dragone; Giuseppe di Pellegrino (2009). "Reward and Social Valuation Deficits following Ventromedial Prefrontal Damage". Journal of Cognitive Neuroscience. 21 (1): 128–140. doi:10.1162/jocn.2009.21011. PMID 18476758.
  25. ^ See, for example, Gale et al. (1995), Güth and Yaari (1992), Huck and Oechssler (1999), Nowak & Sigmund (2000) and Skyrms (1996)
  26. ^ See The Ultimatum game detailed description as a class room plan from EconomicsTeaching.org. (This is a more thorough explanation of the practicalities of the game than is possible here.)
  27. ^ Güth et al. (1982), page 367: the description of the game at Neuroeconomics cites this as the earliest example.
  28. ^ Nowak, M. A.; Page, K. M.; Sigmund, K. (2000). "Fairness Versus Reason in the Ultimatum Game". Science. 289 (5485): 1773–1775. doi:10.1126/science.289.5485.1773. PMID 10976075.
  29. ^ Ultimatum game with proposer competition by the GameLab.
  30. ^ Ruffle (1998), p. 247.
  31. ^ The reverse ultimatum game and the effect of deadlines is from Gneezy, Haruvy, & Roth, A. E. (2003).
  32. ^ Mitzkewitz, Michael; Nagel, Rosemarie (1993). "Experimental results on ultimatum games with incomplete information". International Journal of Game Theory. 22 (2): 171–198. doi:10.1007/BF01243649. ISSN 0020-7276.
  33. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2013). "Contracting under Incomplete Information and Social Preferences: An Experimental Study". The Review of Economic Studies. 80 (4): 1516–1544. doi:10.1093/restud/rdt010. ISSN 0034-6527.
  34. ^ Stewart, Ian (May 1999). "A Puzzle for Pirates" (PDF). Scientific American. 05: 98–99. Retrieved 3/11/2011.

References

  • Alvard, M. (2004). "The Ultimatum Game, Fairness, and Cooperation among Big Game Hunters". In Henrich, J.; Boyd, R.; Bowles, S.; Gintis, H.; Fehr, E.; Camerer, C. (eds.). Foundations of Human Sociality: Ethnography and Experiments in 15 small-scale societies (PDF). Oxford University Press. pp. 413–435.
  • Andersen, Steffen, Seda Ertaç, Uri Gneezy, Moshe Hoffman, and John A. List (2011). "Stakes Matter in Ultimatum Games." American Economic Review 101: 3427-39. doi:10.1257/aer.101.7.3427.
  • Bearden, J. Neil (2001). "Ultimatum Bargaining Experiments: The State of the Art".
  • Bicchieri, Cristina and Jiji Zhang (2008). "An Embarrassment of Riches: Modeling Social Preferences in Ultimatum games", in U. Maki (ed) Handbook of the Philosophy of Economics, Elsevier
  • Bolton, G.E. (1991). "A comparative Model of Bargaining: Theory and Evidence". American Economic Review. 81: 1096–1136.
  • Bosman, R., Sonnemans, J., & Zeelenberg, M. (2001). "Emotions, rejections, and cooling off in the ultimatum game". Unpublished manuscript, University of Amsterdam.
  • Gale, J., Binmore, K.G., and Samuelson, L. (1995). "Learning to be Imperfect: The Ultimatum Game". Games and Economic Behavior. 8: 56–90. doi:10.1016/S0899-8256(05)80017-X.
  • Gneezy, Haruvy, and Roth, A. E. (2003). "Bargaining under a deadline: evidence from the reverse ultimatum game" (PDF). Games and Economic Behavior. 45 (2): 347. doi:10.1016/S0899-8256(03)00151-9. Archived from the original (PDF) on July 31, 2004.
  • Grimm, Veronika and F. Mengel (2011). "Let me sleep on it: Delay reduces rejection rates in Ultimatum Games". Economics Letters. 111 (2): 113–115.
  • Güth, W., Schmittberger, and Schwarze (1982). "An Experimental Analysis of Ultimatum Bargaining". Journal of Economic Behavior and Organization. 3 (4): 367–388. doi:10.1016/0167-2681(82)90011-7.
  • Güth, W.; Yaari, M. (1992). "An Evolutionary Approach to Explain Reciprocal Behavior in a Simple Strategic Game". In U. Witt (ed.). Explaining Process and Change – Approaches to Evolutionary Economics. Ann Arbor. pp. 23–34.
  • Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis (2004). Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. Oxford University Press.
  • Hoppe, Eva I. and Schmitz, Patrick W. (2013). "Contracting under Incomplete Information and Social Preferences: An Experimental Study." Review of Economic Studies 80: 1516-1544. doi:10.1093/restud/rdt010.
  • Huck, S.; Oechssler, J. (1999). "The Indirect Evolutionary Approach to Explaining Fair Allocations". Games and Economic Behavior. 28: 13–24. doi:10.1006/game.1998.0691.
  • Mitzkewitz, Michael, and Rosemarie Nagel (1993). "Experimental results on ultimatum games with incomplete information." International Journal of Game Theory 22: 171-198. doi:10.1007/BF01243649.
  • Ochs, J. and Roth, A. E. (1989). "An Experimental Study of Sequential Bargaining". American Economic Review. 79: 355–384.
  • Oechssler, Jörg; Roider, Andreas; Schmitz, Patrick W. (2015). "Cooling Off in Negotiations: Does it Work?". Journal of Institutional and Theoretical Economics 171: 565-588. doi:10.1628/093245615X14307212950056.
  • Oosterbeek, Hessel, Randolph Sloof, and Gijs van de Kuilen (2004). "Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis". Experimental Economics. 7 (2): 171–188. doi:10.1023/B:EXEC.0000026978.14316.74.
  • Ruffle, B.J. (1998). "More is Better, but Fair is Fair: Tipping in Dictator and Ultimatum Games". Games and Economic Behavior. 23 (2): 247. doi:10.1006/game.1997.0630.
  • Sanfey; Rilling, JK; Aronson, JA; Nystrom, LE; Cohen, JD; et al. (2002). "The neural basis of economic decision-making in the ultimatum game". Science. 300 (5626): 1755–1758. doi:10.1126/science.1082976. PMID 12805551.
  • Skyrms, B. (1996). Evolution of the Social Contract. Cambridge University Press.
  • Zak, P.J., Stanton, A.A., Ahmadi, S. (2007). Brosnan, Sarah (ed.). "Oxytocin Increases Generosity in Humans" (PDF). PLoS ONE. 2 (11): e1128. doi:10.1371/journal.pone.0001128. PMC 2040517. PMID 17987115.
  • Angela A. Stanton (2007). "Neural Substrates of Decision-Making in Economic Games" (PDF). Scientific Journals International. 1 (1): 1–64.
