The Evolution of Cooperation
Author | Robert Axelrod |
---|---|
Language | English |
Genre | Philosophy, sociology |
Publisher | Basic Books |
Publication date | April 1984 |
Publication place | United States |
Media type | Hardback, paperback, audiobook |
Pages | 241 |
ISBN | 0-465-00564-0 |
OCLC | 76963800 |
Dewey Decimal | 302/.14 |
LC Class | HM131.A89 1984 |
The Evolution of Cooperation is a 1984 book by political scientist Robert Axelrod[1] that expanded a highly influential paper of the same name, and popularized the study upon which the original paper had been based. Since 2006, reprints of the book have included a foreword by Richard Dawkins and been marketed as a revised edition.
"The Evolution of Cooperation" is a 1981 paper by Axelrod and evolutionary biologist W. D. Hamilton in the scientific literature, which became the most cited publication in the field of political science.[2]
Evolution of cooperation is a general term for investigation into how cooperation can emerge and persist (also known as cooperation theory) as elucidated by the application of game theory. Traditional game theory did not explain some forms of cooperation well. The academic literature concerned with those forms of cooperation not easily handled in traditional game theory, with special consideration of evolutionary biology, largely took its modern form as a result of Axelrod's and Hamilton's influential 1981 paper and the book that followed.
Cooperation theory
Operations research
The idea that human behavior can be usefully analyzed mathematically gained great credibility following the application of operations research in World War II to improve military operations. One famous example involved how the Royal Air Force hunted submarines in the Bay of Biscay.[3] It had seemed to make sense to patrol the areas where submarines were most frequently seen. Then it was pointed out that "seeing the most submarines" depended not only on the number of submarines present, but also on the number of eyes looking; i.e., patrol density. Making an allowance for patrol density showed that patrols were more efficient – that is, found more submarines per patrol – in other areas. Making appropriate adjustments increased the overall effectiveness.
Game theory
Accounts of the success of operations research during the war, publication in 1944 of John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior (Von Neumann & Morgenstern 1944) on the use of game theory for developing and analyzing optimal strategies for military and other uses, and publication of John Williams's The Compleat Strategyst, a popular exposition of game theory,[4] led to a greater appreciation of mathematical analysis of human behavior.[5]
But game theory faced a crisis: it could not find a satisfactory strategy for a simple game called "The Prisoner's Dilemma" (PD), in which two players have the option to cooperate for mutual gain, but each also takes a risk of being suckered.
Prisoner's dilemma
The prisoner's dilemma game[6] (invented around 1950 by Merrill M. Flood and Melvin Dresher[7]) takes its name from the following scenario: you and a criminal associate have been busted. Fortunately for you, most of the evidence was shredded, so you are facing only a year in prison. But the prosecutor wants to nail someone, so he offers you a deal: if you squeal on your associate – which will result in his getting a five-year stretch – the prosecutor will see that six months is taken off of your sentence. Which sounds good, until you learn your associate is being offered the same deal – which would get you five years.
So what do you do? The best that you and your associate can do together is to not squeal: that is, to cooperate (with each other, not the prosecutor!) in a mutual bond of silence, and do your year. But wait: if your associate cooperates, can you do better by squealing ("defecting") to get that six-month reduction? It's tempting, but then he's also tempted. And if you both squeal, oh, no, it's four and a half years each. So perhaps you should cooperate – but wait, that's being a sucker yourself, as your associate will undoubtedly defect, and you won't even get the six months off. So what is the best strategy to minimize your incarceration (aside from going straight in the first place)?
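To make the payoff structure concrete, here is a minimal sketch in Python of the sentences from the story above (expressed as negative years, so that higher numbers are better; the labels T, R, P, S follow the convention in note [6]):

```python
# Payoffs from the story above, in years of prison (negated so higher is better).
T = -0.5   # temptation: you squeal, your associate stays silent (six months)
R = -1.0   # reward: both stay silent (one year each)
P = -4.5   # punishment: both squeal (four and a half years each)
S = -5.0   # sucker: you stay silent, your associate squeals (five years)

# The two conditions that make a game a prisoner's dilemma (see note [6]):
assert T > R > P > S                # defection always tempts
assert R > (T + S) / 2              # mutual cooperation beats alternating exploitation
```

Whatever the other player does, squealing improves your own outcome, yet mutual squealing leaves both players worse off than mutual silence; that tension is the dilemma.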
To cooperate, or not cooperate? This simple question (and the implicit question of whether to trust, or not), expressed in an extremely simple game, is a crucial issue across a broad range of life. Why shouldn't a shark eat the little fish that has just cleaned it of parasites: in any given exchange who would know? Fig wasps collectively limit the eggs they lay in fig trees (otherwise, the trees would suffer). But why shouldn't any one fig wasp cheat and leave a few more eggs than her rivals? At the level of human society, why shouldn't each of the villagers that share a common but finite resource try to exploit it more than the others?[8] At the core of these and myriad other examples is a conflict formally equivalent to the Prisoner's Dilemma. Yet sharks, fig wasps, and villagers all cooperate. It has been a vexatious problem in evolutionary studies to explain how such cooperation should evolve, let alone persist, in a world of self-maximizing egoists.
Darwinian context
Charles Darwin's theory of how evolution works ("By Means of Natural Selection"[9]) is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("nature, red in tooth and claw"). Species are pitted against species for shared resources, similar species with similar needs and niches even more so, and individuals within species most of all.[10] All this comes down to one factor: out-competing all rivals and predators in producing progeny.
Darwin's explanation of how preferential survival of the slightest benefits can lead to advanced forms is the most important explanatory principle in biology, and extremely powerful in many other fields. Such success has reinforced notions that life is in all respects a war of each against all, where every individual has to look out for himself, that your gain is my loss.
In such a struggle for existence altruism (voluntarily yielding a benefit to a non-relative) and even cooperation (working with another for a mutual benefit) seem so antithetical to self-interest as to be the very kind of behavior that should be selected against. Yet cooperation and seemingly even altruism have evolved and persist, including even interspecific cooperation, and naturalists have been hard pressed to explain why.
Social Darwinism
The popularity of the evolution of cooperation – the reason it is not an obscure technical issue of interest to only a small number of specialists – is in part because it mirrors a larger issue where the realms of political philosophy, ethics, and biology intersect: the ancient issue of individual interests versus group interests. On one hand, the so-called "Social Darwinists" (roughly, those who would use the "survival of the fittest" of Darwinian evolution to justify the cutthroat competitiveness of laissez-faire capitalism[11]) proclaim that the world is an inherently competitive "dog eat dog" jungle, where every individual has to look out for himself. The writer Ayn Rand damned "altruism" and declared selfishness a virtue.[12] The Social Darwinists' view is derived from Charles Darwin's interpretation of evolution by natural selection, which is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("red in tooth and claw"), and permeated by the Victorian laissez-faire ethos of Darwin and his disciples (such as T. H. Huxley and Herbert Spencer). What they read into the theory was then read out by Social Darwinists as scientific justification for their social and economic views (such as poverty being a natural condition and social reform an unnatural meddling).[13]
Such views of evolution, competition, and the survival of the fittest are explicit in the ethos of modern capitalism, as epitomized by industrialist Andrew Carnegie in The Gospel of Wealth:
[W]hile the law [of competition] may be sometimes hard for the individual, it is best for the race, because it ensures the survival of the fittest in every department. We accept and welcome, therefore, as conditions to which we must accommodate ourselves, great inequality of environment; the concentration of business, industrial and commercial, in the hands of the few; and the law of competition between these, as being not only beneficial, but essential to the future progress of the race. (Carnegie 1900)
While the validity of extrapolating moral and political views from science is questionable, the significance of such views in modern society is undoubtable.
The social contract and morality
On the other hand, other philosophers have long observed that cooperation in the form of a "social contract" is necessary for human society, but saw no way of attaining that short of a coercive authority.
As Thomas Hobbes wrote in Leviathan:
[T]here must be some coercive power to compel men equally to the performance of their covenants by the terror of some punishment greater than the benefit they expect by the breach of their covenant.... (Hobbes 1651, p. 120)
[C]ovenants without the sword are but words.... (Hobbes 1651, p. 139)
And Jean Jacques Rousseau in The Social Contract:
[The social contract] can arise only where several persons come together: but, as the force and liberty of each man are the chief instruments of his self-preservation, how can he pledge them without harming his own interests, and neglecting the care he owes himself? (Rousseau 1762, p. 13)
In order then that the social contract may not be an empty formula, it tacitly includes the undertaking, which alone can give force to the rest, that whoever refuses to obey the general will shall be compelled to do so by the whole body. This means nothing less than that he will be forced to be free.... (Rousseau 1762, p. 18)
Even Herman Melville, in Moby-Dick, has the cannibal harpooner Queequeg explain why he saved the life of someone who had been jeering him:
"It's a mutual, joint-stock world, in all meridians. We cannibals must help these Christians." (Melville 1851, p. 96)
The original role of government is to provide the coercive power to enforce the social contract (and in commercial societies, contracts and covenants generally). Where government does not exist or cannot reach it is often deemed the role of religion to promote prosocial and moral behavior, but this tends to depend on threats of hell-fire (what Hobbes called "the terror of some power"); such inducements seem more mystical than rational, and philosophers have been hard-pressed to explain why self-interest should yield to morality, why there should be any duty to be "good".[14]
Yet cooperation, and even altruism and morality, are prevalent, even in the absence of coercion, even though it seems that a properly self-regarding individual should reject all such social strictures and limitations. As early as 1890 the Russian naturalist Petr Kropotkin observed that the species that survived were those in which individuals cooperated, and that "mutual aid" (cooperation) was found at all levels of existence.[15] By the 1960s biologists and zoologists were noting many instances in the real "jungle" where real animals – presumably unfettered by conscience and not corrupted by altruistic liberals – and even microbes (see microbial cooperation) were cooperating.[16]
Darwin's theory of natural selection is a profoundly powerful explanation of how evolution works; its undoubted success strongly suggests an inherently antagonistic relationship between unrelated individuals. Yet cooperation is prevalent, seems beneficial, and even seems to be essential to human society. Explaining this seeming contradiction, and accommodating cooperation, and even altruism, within Darwinian theory is a central issue in the theory of cooperation.
Modern developments
Darwin's explanation of how evolution works is quite simple, but the implications of how it might explain complex phenomena are not at all obvious; it has taken over a century to elaborate (see modern synthesis).[17] Explaining how altruism – which by definition reduces personal fitness – can arise by natural selection is a particular problem, and the central theoretical problem of sociobiology.[18]
A possible explanation of altruism is provided by the theory of group selection (first suggested by Darwin himself while grappling with the issue of social insects[19]), which argues that natural selection can act on groups: groups that are more successful – for any reason, including learned behaviors – will benefit the individuals of the group, even if they are not related. It has had a powerful appeal, but has not been fully persuasive, in part because of difficulties regarding cheaters that participate in the group without contributing.[20]
Another explanation is provided by the genetic kinship theory of William D. Hamilton:[21] if a gene causes an individual to help other individuals that carry copies of that gene, then the gene has a net benefit even with the sacrifice of a few individuals. The classic example is the social insects, where the workers – which are sterile, and therefore incapable of passing on their genes – benefit the queen, who is essentially passing on copies of "their" genes. This is further elaborated in the "selfish gene" theory of Richard Dawkins, that the unit of evolution is not the individual organism, but the gene.[22] (As stated by Wilson: "the organism is only DNA's way of making more DNA."[23]) However, kinship selection works only where the individuals involved are closely related; it fails to explain the presence of altruism and cooperation between unrelated individuals, particularly across species.
In a 1971 paper[24] Robert Trivers demonstrated how reciprocal altruism can evolve between unrelated individuals, even between individuals of entirely different species. And the relationship of the individuals involved is exactly analogous to the situation in a certain form of the Prisoner's Dilemma.[25] The key is that in the iterated Prisoner's Dilemma, or IPD, both parties can benefit from the exchange of many seemingly altruistic acts. As Trivers says, it "take[s] the altruism out of altruism."[26] The Randian premise that self-interest is paramount is largely unchallenged, but turned on its head by recognition of a broader, more profound view of what constitutes self-interest.
It does not matter why the individuals cooperate. The individuals may be prompted to the exchange of "altruistic" acts by entirely different genes, or no genes in particular, but both individuals (and their genomes) can benefit simply on the basis of a shared exchange. In particular, "the benefits of human altruism are to be seen as coming directly from reciprocity – not indirectly through non-altruistic group benefits".[27]
Trivers' theory is very powerful. Not only can it replace group selection, it also predicts various observed behaviors, including moralistic aggression,[28] gratitude and sympathy, guilt and reparative altruism,[29] and the development of abilities to detect and discriminate against subtle cheaters.
The benefits of such reciprocal altruism were dramatically demonstrated by a pair of tournaments held by Robert Axelrod around 1980.
Axelrod's tournaments
Axelrod initially solicited strategies from other game theorists to compete in the first tournament. Each strategy was paired with each other strategy for 200 iterations of a Prisoner's Dilemma game, and scored on the total points accumulated through the tournament. The winner was a very simple strategy submitted by Anatol Rapoport called "TIT FOR TAT" (TFT) that cooperates on the first move, and subsequently echoes (reciprocates) what the other player did on the previous move. The results of the first tournament were analyzed and published, and a second tournament held to see if anyone could find a better strategy. TIT FOR TAT won again. Axelrod analyzed the results, and made some interesting discoveries about the nature of cooperation, which he describes in his book.[30]
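The structure of such a round-robin tournament can be sketched in a few lines of Python (TIT FOR TAT, the 200-move matches, and the payoff values T=5, R=3, P=1, S=0 are Axelrod's; the ALL D opponent and the framework itself are illustrative, not his original Fortran code):

```python
from itertools import combinations_with_replacement

# Payoffs to (row, column) for each pair of moves, using Axelrod's values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own, other):
    # Cooperate on the first move, then echo the other player's last move.
    return other[-1] if other else 'C'

def all_d(own, other):
    # Always defect (an illustrative opponent).
    return 'D'

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = {'TIT FOR TAT': tit_for_tat, 'ALL D': all_d}
totals = dict.fromkeys(strategies, 0)
for (na, fa), (nb, fb) in combinations_with_replacement(strategies.items(), 2):
    sa, sb = play_match(fa, fb)
    totals[na] += sa
    if na != nb:
        totals[nb] += sb
print(totals)   # {'TIT FOR TAT': 799, 'ALL D': 404}
```

Note that ALL D beats TIT FOR TAT head to head (204 points to 199), yet loses the tournament total; this foreshadows the "don't be envious" lesson discussed below.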
In both actual tournaments and various replays the best performing strategies were nice:[31] that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."[32]
Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit – or avoid exploitation – it is necessary to be provocable to both retaliation and forgiveness. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection.[33] The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.[34]
Most of the games that game theory had heretofore investigated were "zero-sum" – that is, the total rewards are fixed, and a player does well only at the expense of other players. But real life is not zero-sum. Our best prospects are usually in cooperative efforts. In fact, TFT cannot score higher than its partner; at best it can only do "as good as". Yet it won the tournaments by consistently scoring a strong second place with a variety of partners.[35] Axelrod summarizes this as don't be envious;[36] in other words, don't strive for a payoff greater than the other player's.[37]
In any IPD game there is a certain maximum score each player can get by always cooperating. But some strategies try to find ways of getting a little more with an occasional defection (exploitation). This can work against some strategies that are less provocable or more forgiving than TIT FOR TAT, but generally they do poorly. "A common problem with these rules is that they used complex methods of making inferences about the other player [strategy] – and these inferences were wrong."[38] Against TFT one can do no better than to simply cooperate.[39] Axelrod calls this clarity. Or: don't be too clever.[40]
The success of any strategy depends on the nature of the particular strategies it encounters, which depends on the composition of the overall population. To better model the effects of reproductive success Axelrod also did an "ecological" tournament, where the prevalence of each type of strategy in each round was determined by that strategy's success in the previous round. The competition in each round becomes stronger as weaker performers are reduced and eliminated. The results were amazing: a handful of strategies – all "nice" – came to dominate the field.[41] In a sea of non-nice strategies the "nice" strategies – provided they were also provocable – did well enough with each other to offset the occasional exploitation. As cooperation became general the non-provocable strategies were exploited and eventually eliminated, whereupon the exploitive (non-cooperating) strategies were out-performed by the cooperative strategies.
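The general idea of such an ecological iteration can be sketched as follows (reusing play_match and strategies from the sketch above; the proportional update is a standard discrete replicator step, used here for illustration rather than as Axelrod's exact procedure):

```python
def ecological_step(shares, strategies, rounds=200):
    # Fitness of each strategy = its expected score against the current population mix.
    names = list(strategies)
    fitness = {a: sum(shares[b] * play_match(strategies[a], strategies[b], rounds)[0]
                      for b in names)
               for a in names}
    mean = sum(shares[n] * fitness[n] for n in names)
    # Next generation: each strategy's share grows in proportion to its fitness.
    return {n: shares[n] * fitness[n] / mean for n in names}

shares = {'TIT FOR TAT': 0.1, 'ALL D': 0.9}
for _ in range(50):
    shares = ecological_step(shares, strategies)
print(shares)   # TIT FOR TAT comes to dominate as the exploiters run out of prey
```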
In summary, success in an evolutionary "game" correlated with the following characteristics:
- Be nice: cooperate, never be the first to defect.
- Be provocable: return defection for defection, cooperation for cooperation.
- Don't be envious: focus on maximizing your own 'score', as opposed to ensuring your score is higher than your 'partner's'.
- Don't be too clever: or, don't try to be tricky. Clarity is essential for others to cooperate with you.
Foundation of reciprocal cooperation
The lessons described above apply in environments that support cooperation, but whether cooperation is supported at all depends crucially on the probability (called ω [omega]) that the players will meet again,[42] also called the discount parameter or, poetically, the shadow of the future. When ω is low – that is, the players have a negligible chance of meeting again – each interaction is effectively a single-shot Prisoner's Dilemma game, and one might as well defect in all cases (a strategy called "ALL D"), because even if one cooperates there is no way to keep the other player from exploiting that. But in the iterated PD the value of repeated cooperative interactions can become greater than the benefit/risk of a single exploitation (which is all that a strategy like TFT will tolerate).
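Axelrod makes this threshold precise: his collective-stability result for TIT FOR TAT states, in the usual payoff labels, that cooperation can resist invasion only if the discount parameter is large enough. A quick check with the tournament payoffs used in the sketches above:

```python
# Axelrod's condition for TIT FOR TAT to be collectively stable:
#     w >= max((T - R) / (T - P), (T - R) / (R - S))
# Below this threshold a single exploitation pays, and ALL D can invade.
T, R, P, S = 5, 3, 1, 0
w_min = max((T - R) / (T - P), (T - R) / (R - S))
print(w_min)   # 0.666..., i.e. the shadow of the future must be at least 2/3
```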
Curiously, rationality and deliberate choice are not necessary, nor trust nor even consciousness,[43] as long as there is a pattern that benefits both players (e.g., increases fitness), and some probability of future interaction. Often the initial mutual cooperation is not even intentional, but having "discovered" a beneficial pattern both parties respond to it by continuing the conditions that maintain it.
This implies two requirements for the players, aside from whatever strategy they may adopt. First, they must be able to recognize other players, to avoid exploitation by cheaters. Second, they must be able to track their previous history with any given player, in order to be responsive to that player's strategy.[44]
Even when the discount parameter ω is high enough to permit reciprocal cooperation there is still a question of whether and how cooperation might start. One of Axelrod's findings is that when the existing population never offers cooperation nor reciprocates it – the case of ALL D – then no nice strategy can get established by isolated individuals; cooperation is strictly a sucker bet. (The "futility of isolated revolt".[45]) But another finding of great significance is that clusters of nice strategies can get established. Even a small group of individuals with nice strategies, interacting with each other only infrequently, can do well enough on those interactions to make up for the low level of exploitation by non-nice strategies, as the sketch below illustrates.[46]
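A back-of-the-envelope version of the cluster argument (the discount parameter w = 0.9 is assumed here for illustration; the payoffs are the tournament values used above, and each value is the discounted sum over the repeated game):

```python
w = 0.9
T, R, P, S = 5, 3, 1, 0
v_tft_tft   = R / (1 - w)           # two TFTs cooperate forever: 30
v_tft_alld  = S + w * P / (1 - w)   # TFT is suckered once, then mutual defection: 9
v_alld_alld = P / (1 - w)           # residents mostly meet other ALL D players: 10

# A cluster member spends fraction p of its interactions inside the cluster.
# It outscores the ALL D residents when p*30 + (1 - p)*9 > 10:
p_min = (v_alld_alld - v_tft_alld) / (v_tft_tft - v_tft_alld)
print(p_min)   # ~0.048 – even about 5% in-cluster interaction suffices
```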
Cooperation becomes more complicated, however, as soon as more realistic models are assumed that, for instance, offer more than two choices of action, provide the possibility of gradual cooperation, make actions constrain future actions (path dependence), or in which interpreting the associate's actions is non-trivial (e.g. recognizing the degree of cooperation shown).[47]
Subsequent work
In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts",[48] and estimated that citations to The Evolution of Cooperation alone were "growing at the rate of over 300 per year".[49] To fully review this literature is infeasible. What follows are therefore only a few selected highlights.
Axelrod has a subsequent book, The Complexity of Cooperation,[50] which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally,[51] and in religion,[52] other mechanisms for generating cooperation,[53] the IPD under different conditions and assumptions,[54] and the use of other games such as the Public Goods and Ultimatum games to explore deep-seated notions of fairness and fair play.[55] It has also been used to challenge the rational and self-regarding "economic man" model of economics,[56] and as a basis for replacing Darwinian sexual selection theory with a theory of social selection.[57]
Nice strategies are better able to invade if they have social structures or other means of increasing their interactions. Axelrod discusses this in chapter 8; in a later paper he, Rick Riolo, and Michael Cohen[58] use computer simulations to show cooperation rising among agents who have a negligible chance of future encounters but can recognize similarity of an arbitrary characteristic (such as a green beard). Other studies[59] have shown that the only Iterated Prisoner's Dilemma strategies that resist invasion in a well-mixed evolving population are generous strategies.
When an IPD tournament introduces noise (errors or misunderstandings), TFT strategies can get trapped into a long string of retaliatory defections, thereby depressing their score. TFT also tolerates "ALL C" (always cooperate) strategies, which then give an opening to exploiters.[60] In 1993 Martin Nowak and Karl Sigmund demonstrated a strategy called Pavlov (or "win–stay, lose–shift") that does better in these circumstances.[61] Pavlov looks at its own prior move as well as the other player's move. If the payoff was R or P (see "Prisoner's Dilemma", above) it cooperates; if S or T it defects.
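A minimal sketch of the Pavlov rule as just described (the opening cooperative move is an assumption added for completeness; the text above specifies only the response rule):

```python
def pavlov(own, other):
    # Win-stay, lose-shift: after a payoff of R or T (the other player cooperated),
    # repeat the previous move; after S or P (the other player defected), switch.
    if not own:
        return 'C'                         # assumed opening move
    if other[-1] == 'C':                   # last payoff was R or T: stay
        return own[-1]
    return 'D' if own[-1] == 'C' else 'C'  # last payoff was S or P: shift
```

Because it takes the same (own, other) history arguments, pavlov drops straight into the tournament framework sketched earlier. Unlike TIT FOR TAT, two Pavlov players who stumble into mutual defection both switch back to cooperation on the next move, which is what lets Pavlov escape the echoing retaliations that noise induces.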
In a 2006 paper Nowak listed five mechanisms by which natural selection can lead to cooperation.[62] In addition to kin selection and direct reciprocity, he shows that:
- Indirect reciprocity is based on knowing the other player's reputation, which is the player's history with other players. Cooperation depends on a reliable history being projected from past partners to future partners.
- Network reciprocity relies on geographical or social factors to increase the interactions with nearer neighbors; it is essentially a virtual group.
- Group selection[63] assumes that groups with cooperators (even altruists) will be more successful as a whole, and this will tend to benefit all members.
The payoffs in the Prisoner's Dilemma game are fixed, but in real life defectors are often punished by cooperators. Where punishment is costly there is a second-order dilemma amongst cooperators between those who pay the cost of enforcement and those who do not.[64] Other work has shown that individuals given a choice between joining a group that punishes free-riders and one that does not will initially prefer the sanction-free group, yet after several rounds will join the sanctioning group, seeing that sanctions secure a better payoff.[65]
In small populations or groups there is the possibility that indirect reciprocity (reputation) can interact with direct reciprocity (e.g. tit for tat) with neither strategy dominating the other.[66] The interactions between these strategies can give rise to dynamic social networks which exhibit some of the properties observed in empirical networks.[67] If network structure and choices in the Prisoner's Dilemma co-evolve, then cooperation can survive. In the resulting networks cooperators will be more centrally located than defectors, who will tend to be on the periphery of the network.[68]
Another intriguing result comes from the paper "The Coevolution of Parochial Altruism and War" by Jung-Kyoo Choi and Samuel Bowles. From their summary:
Altruism—benefiting fellow group members at a cost to oneself —and parochialism—hostility towards individuals not of one's own ethnic, racial, or other group—are common human behaviors. The intersection of the two—which we term "parochial altruism"—is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one's payoffs by comparison to what one would gain from eschewing these behaviors. But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.... [Neither] would have been viable singly, but by promoting group conflict they could have evolved jointly.[69]
They do not claim that humans have actually evolved in this way, but that computer simulations show how war could be promoted by the interaction of these behaviors. A crucial open research question, thus, is how realistic the assumptions are which these simulation models are based on.[70]
Summary and current understanding
When Richard Dawkins set out to "examine the biology of selfishness and altruism" in The Selfish Gene, he reinterpreted the basis of evolution, and therefore of altruism. He was "not advocating a morality based on evolution",[71] and even felt that "we must teach our children altruism, for we cannot expect it to be part of their biological nature."[72] But John Maynard Smith[73] was showing that behavior could be subject to evolution, Robert Trivers had shown that natural selection strongly favors reciprocal altruism, which can lead to complex systems of altruistic behavior (supporting Kropotkin's argument that cooperation is as much a factor of evolution as competition[74]), and Axelrod's dramatic results showed that in a very simple game the conditions for survival (be "nice", be provocable, promote the mutual interest) seem to be the essence of morality. While this does not yet amount to a science of morality, the game theoretic approach has clarified the conditions required for the evolution and persistence of cooperation, and shown how Darwinian natural selection can lead to complex behavior, including notions of morality, fairness, and justice. This work shows that the nature of self-interest is more profound than previously considered, and that behavior that seems altruistic may, in a broader view, be individually beneficial. Extensions of this work to morality[75] and the social contract[76] may yet resolve the old issue of individual interests versus group interests.
Software
Several software packages have been created to run prisoner's dilemma simulations and tournaments, some of which have available source code.
- The source code for the second tournament run by Robert Axelrod (written by Axelrod and many contributors in Fortran) is available online
- PRISON, a library written in Java, last updated in 1999
- Axelrod-Python, written in Python
Recommended reading
- Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation" (PDF), Science, 211 (4489): 1390–96, Bibcode:1981Sci...211.1390A, doi:10.1126/science.7466396, PMID 7466396
- Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-0
- Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group, ISBN 0-465-00564-0
- Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press, ISBN 0-691-01567-8
- Dawkins, Richard (1989) [1976], The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
- Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot", Natural History, 106: 12–21
- Ridley, Matt (1996), The Origins of Virtue, Viking (Penguin Books), ISBN 0-670-86357-2
- Sigmund, Karl; Fehr, Ernest; Nowak, Martin A. (January 2002), "The Economics of Fair Play" (PDF), Scientific American, vol. 286, no. 1, pp. 82–87, Bibcode:2002SciAm.286a..82S, doi:10.1038/scientificamerican0102-82, PMID 11799620, archived from the original (PDF) on 18 May 2011
- Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism" (PDF), Quarterly Review of Biology, 46: 35–57, doi:10.1086/406755
- Vogel, Gretchen (20 February 2004), "News Focus: The Evolution of the Golden Rule", Science, 303 (5661): 1128–31, doi:10.1126/science.303.5661.1128, PMID 14976292
References
- ^ Axelrod's book was summarized in Douglas Hofstadter's May 1983 "Metamagical Themas" column in Scientific American (Hofstadter 1983), reprinted in his book (Hofstadter 1985); see also Richard Dawkins's summary in the second edition of The Selfish Gene (Dawkins 1989, ch. 12).
- ^ Peress, Michael (2018). "Measuring the Research Productivity of Political Science Departments Using Google Scholar". PS: Political Science & Politics. 52 (2): 312–317. doi:10.1017/S1049096518001610. ISSN 1049-0965.
- ^ Morse & Kimball 1951, 1956
- ^ Williams 1954, 1966
- ^ See Poundstone (1992) for a good overview of the development of game theory.
- ^ Technically, the prisoner's dilemma is any two-person "game" where the payoffs are ranked in a certain way. If the payoff ("reward") for mutual cooperation is R, for mutual defection is P, the sucker gets only S, and the temptation payoff (provided the other player is suckered into cooperating) is T, then the payoffs need to be ordered T > R > P > S, and satisfy R > (T+S)/2. (Axelrod 1984, pp. 8–10, 206–207).
- ^ Axelrod 1984, p. 216 n. 2; Poundstone 1992.
- ^ See Hardin (1968), "The Tragedy of the Commons".
- ^ "By Means of Natural Selection" being the subtitle of his work, On the Origin of Species.
- ^ Darwin 1859, pp. 75, 76, 320.
- ^ Bowler 1984, pp. 94–99, 269–70.
- ^ Rand 1961.
- ^ Bowler 1984, pp. 94–99
- ^ See Gauthier 1970 for a lively debate on morality and self-interest. Aristotle's comment on the effectiveness of philosophic argument: "For the many yield to compulsion more than to argument." (Nicomachean Ethics, Book X, 1180a15, Irwin translation)
- ^ Kropotkin 1902, but originally published in the magazine Nineteenth Century starting in 1890.
- ^ Axelrod 1984, p. 90; Trivers 1971.
- ^ See Bowler (1984) generally.
- ^ Wilson 1975.
- ^ Darwin 1859, p. 237.
- ^ Axelrod & Hamilton 1981; Trivers 1971, pp. 44, 48; Bowler 1984, p. 312; Dawkins 1989, pp. 7–10, 287, ch. 7 generally.
- ^ Hamilton 1964.
- ^ Dawkins 1989, p. 11.
- ^ Wilson 1975, p. 3.
- ^ Trivers 1971.
- ^ Trivers 1971, pp. 38–39.
- ^ Trivers 1971, p. 35.
- ^ Trivers 1971, p. 47. More pointedly, Trivers also said (p. 48): "No concept of group advantage is necessary to explain the function of human altruistic behavior."
- ^ To deter cheaters from exploiting altruists. And "in extreme cases, perhaps, to select directly against the unreciprocating individual by injuring, killing, or exiling him." (Trivers 1971, p. 49)
- ^ Analogous to the situation in the IPD where, having once defected, a player voluntarily elects to cooperate, even in anticipation of being suckered, in order to return to a state of mutual cooperation. As Trivers says (p. 50): "It seems plausible ... that the emotion of guilt has been selected for in humans partly in order to motivate the cheater to compensate his misdeed and to behave reciprocally in the future...."
- ^ Axelrod 1984.
- ^ Axelrod 1984, p. 113.
- ^ Axelrod 1984, p. 130.
- ^ Axelrod 1984, pp. 62, 211.
- ^ Axelrod 1984, p. 186.
- ^ Axelrod 1984, p. 112.
- ^ Axelrod 1984, pp. 110–113.
- ^ Axelrod 1984, p. 25.
- ^ Axelrod 1984, p. 120.
- ^ Axelrod 1984, pp. 47, 118.
- ^ Axelrod 1984, pp. 120+.
- ^ Axelrod 1984, pp. 48–53.
- ^ Axelrod 1984, p. 13.
- ^ Axelrod 1984, pp. 18, 174.
- ^ Axelrod 1984, p. 174.
- ^ Axelrod 1984, p. 150.
- ^ Axelrod 1984, pp. 63–68, 99.
- ^ Prechelt, Lutz (1996). "INCA: A multi-choice model of cooperation under restricted communication". Biosystems. 37 (1–2): 127–134. doi:10.1016/0303-2647(95)01549-3.
- ^ Axelrod 1984, p. 28.
- ^ Axelrod 1984, p. 3.
- ^ Axelrod 1997.
- ^ Boyd 2006; Bowles 2006.
- ^ Norenzayan & Shariff 2008.
- ^ Nowak 2006.
- ^ Axelrod & Dion 1988; Hoffman 2000 categorizes and summarizes over 50 studies
- ^ Nowak, Page & Sigmund 2000; Sigmund, Fehr & Nowak 2002 harvnb error: multiple targets (2×): CITEREFSigmundFehrNowak2002 (help).
- ^ Camerer & Fehr 2006.
- ^ Roughgarden, Oishi & Akcay 2006.
- ^ Riolo, Cohen & Axelrod 2001.
- ^ Stewart & Plotkin 2013.
- ^ Axelrod (1984, pp. 136–138) has some interesting comments on the need to suppress universal cooperators. See also a similar theme in Piers Anthony's novel Macroscope.
- ^ Nowak & Sigmund 1992; see also Milinski 1993.
- ^ Nowak 2006.
- ^ Here group selection is not a form of evolution, which is problematical (see Dawkins (1989), ch. 7), but a mechanism for evolving cooperation.
- ^ Hauert & others 2007.
- ^ Gürerk, Irlenbusch & Rockenbach 2006
- ^ Phelps, S., Nevarez, G. & Howes, A., 2009. The effect of group size and frequency of encounter on the evolution of cooperation. In LNCS, Volume 5778, ECAL 2009, Advances in Artificial Life: Darwin meets Von Neumann. Budapest: Springer, pp. 37–44.
- ^ Phelps, S (2012). "Emergence of social networks via direct and indirect reciprocity" (PDF). Autonomous Agents and Multi-Agent Systems. doi:10.1007/s10458-012-9207-8.
- ^ Fosco and Mengel (2011)
- ^ Choi & Bowles 2007, p. 636.
- ^ Rusch 2014.
- ^ Dawkins 1989, pp. 1 and 2.
- ^ Dawkins 1989, p. 139.
- ^ Maynard Smith 1976, 1978, 1982
- ^ Kropotkin 1902. Why Kropotkin did not prevail is interesting – see Stephen Jay Gould's article "Kropotkin was no crackpot" (Gould 1997) – but beyond the scope of this article.
- ^ Gauthier 1986.
- ^ Kavka 1986; Binmore 1994, 1998a, 2004
Bibliography
Most of these references are to the scientific literature, to establish the authority of various points in the article. A few references of lesser authority but greater accessibility are also included.
- Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-0
- Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press, ISBN 0-691-01567-8
- Axelrod, Robert (July 2000), "On Six Advances in Cooperation Theory" (PDF), Analyse & Kritik, 22: 130–151
- Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group, ISBN 0-465-00564-0
- Axelrod, Robert; D'Ambrosio, Lisa (1996), Annotated Bibliography on the Evolution of Cooperation (PDF)
- Axelrod, Robert; Dion, Douglas (9 December 1988), "The Further Evolution of Cooperation" (PDF), Science, 242 (4884): 1385–90, Bibcode:1988Sci...242.1385A, doi:10.1126/science.242.4884.1385, PMID 17802133
- Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation" (PDF), Science, 211 (4489): 1390–96, Bibcode:1981Sci...211.1390A, doi:10.1126/science.7466396, PMID 7466396
- Binmore, Kenneth G. (1994), Game Theory and the Social Contract: Vol. 1, Playing Fair, MIT Press
- Binmore, Kenneth G. (1998a), Game Theory and the Social Contract: Vol. 2, Just Playing, MIT Press
- Binmore, Kenneth G. (1998b), Review of 'The Complexity of Cooperation'
- Binmore, Kenneth G. (2004), "Reciprocity and the social contract" (PDF), Politics, Philosophy & Economics, 3: 5–6, doi:10.1177/1470594X04039981
- Bowler, Peter J. (1984), Evolution: The History of an Idea, Univ. of California Press, ISBN 0-520-04880-6
- Bowles, Samuel (8 December 2006), "Group Competition, Reproductive Leveling, and the Evolution of Human Altruism", Science, 314 (5805): 1569–72, Bibcode:2006Sci...314.1569B, doi:10.1126/science.1134829, PMID 17158320
- Bowles, Samuel; Choi, Jung-Koo; Hopfensitz, Astrid (2003), "The co-evolution of individual behaviors and social institutions" (PDF), Journal of Theoretical Biology, 223 (2): 135–147, doi:10.1016/S0022-5193(03)00060-2, PMID 12814597
- Boyd, Robert (8 December 2006), "The Puzzle of Human Sociality" (PDF), Science, 314 (5805): 1555–56, doi:10.1126/science.1136841, PMID 17158313
- Camerer, Colin F.; Fehr, Ernst (6 January 2006), "When Does 'Economic Man' Dominate Social Behavior?" (PDF), Science, 311 (5757): 47–52, Bibcode:2006Sci...311...47C, doi:10.1126/science.1110600, PMID 16400140
- Carnegie, Andrew (1900), The Gospel of Wealth, and Other Timely Essays
- Choi, Jung-Kyoo; Bowles, Samuel (26 October 2007), "The Coevolution of Parochial Altruism and War", Science, 318: 636–40, Bibcode:2007Sci...318..636C, doi:10.1126/science.1144237, PMID 17962562
- Darwin, Charles (1964) [1859], On the Origin of Species (A Facsimile of the First ed.), Harvard Univ. Press
- Dawkins, Richard (1989) [1976], The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
- Fosco, Constanza; Mengel, Friederike (2011). "Cooperation through Imitation and Exclusion in Networks" (PDF). Journal of Economic Dynamics and Control. 35 (5): 641–658. doi:10.1016/j.jedc.2010.12.002.
- Gauthier, David P. (1986), Morals by agreement, Oxford Univ. Press
- Gauthier, David P., ed. (1970), Morality and Rational Self-Interest, Prentice-Hall
- Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot", Natural History, 106: 12–21
- Gürerk, Özgür; Irlenbush, Bernd; Rockenbach, Bettina (7 April 2006), "The Competitive Advantage of Sanctioning Institutions" (PDF), Science, 312 (5770): 108–11, Bibcode:2006Sci...312..108G, doi:10.1126/science.1123633, PMID 16601192, archived from the original (PDF) on 19 July 2011
- Hamilton, William D. (1963), "The Evolution of Altruistic Behavior" (PDF), American Naturalist, 97 (896): 354–56, doi:10.1086/497114
- Hamilton, William D. (1964), "The Genetical Evolution of Social Behavior" (PDF), Journal of Theoretical Biology, 7 (1): 1–16, 17–52, doi:10.1016/0022-5193(64)90038-4, PMID 5875341, archived from the original (PDF) on 29 December 2009
- Hardin, Garrett (13 December 1968), "The Tragedy of the Commons" (PDF), Science, 162 (3859): 1243–1248, Bibcode:1968Sci...162.1243H, doi:10.1126/science.162.3859.1243, PMID 5699198, archived from the original (PDF) on 19 July 2011
- Hauert, Christoph; Traulsen, Arne; Brandt, Hannelore; Nowak, Martin A.; Sigmund, Karl (29 June 2007), "Via Freedom to Coercion: The Emergence of Costly Punishment" (PDF), Science, 316 (5833): 1905–07, Bibcode:2007Sci...316.1905H, doi:10.1126/science.1141588, PMC 2430058, PMID 17600218, archived from the original (PDF) on 18 May 2011
- Henrich, Joseph (7 April 2006), "Cooperation, Punishment, and the Evolution of Human Institutions" (PDF), Science, 312 (5770): 60–61, doi:10.1126/science.1126398, PMID 16601179, archived from the original (PDF) on 29 June 2011
- Henrich, Joseph; et al. (23 June 2007), "Costly Punishment Across Human Societies" (PDF), Science, 312 (5781): 1767–70, Bibcode:2006Sci...312.1767H, doi:10.1126/science.1127333, PMID 16794075
- Hobbes, Thomas (1958) [1651], Leviathan, Bobbs-Merrill [and others]
- Hoffman, Robert (2000), "Twenty Years on: The Evolution of Cooperation Revisited" (PDF), Journal of Artificial Societies and Social Simulation, 3 (2): 1390–1396
- Hofstadter, Douglas R. (May 1983), "Metamagical Themas: Computer Tournaments of the Prisoner's Dilemma Suggest How Cooperation Evolves", Scientific American, vol. 248, pp. 16–26
- Hofstadter, Douglas R. (1985), "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation", Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, pp. 715–730, ISBN 0-465-04540-5
- Kavka, Gregory S. (1986), Hobbesian moral and political theory, Princeton Univ. Press
- Kropotkin, Petr (1902), Mutual Aid: A Factor in Evolution
- Maynard Smith, John (1976), "Evolution and the Theory of Games", American Scientist, vol. 64, no. 1, pp. 41–45, Bibcode:1976AmSci..64...41M
- Maynard Smith, John (September 1978), "The Evolution of Behavior", Scientific American, vol. 239, no. 3, pp. 176–92, Bibcode:1978SciAm.239c.176S, doi:10.1038/scientificamerican0978-176, PMID 705322
- Maynard Smith, John (1982), Evolution and the Theory of Games, Cambridge Univ. Press
- Melville, Herman (1977) [1851], Moby-Dick, Bobbs-Merrill [and others]
- Milinski, Manfred (1 July 1993), "News and Views: Cooperation Wins and Stays", Nature, 364 (6432): 12–13, Bibcode:1993Natur.364...12M, doi:10.1038/364012a0, PMID 8316291
- Morse, Phillip M.; Kimball, George E. (1951), Methods of Operations Research
- Morse, Phillip M.; Kimball, George E. (1956), "How to Hunt a Submarine", in Newman, James R. (ed.), The World of Mathematics, vol. 4, Simon and Schuster, pp. 2160–79
- Norenzayan, Ara; Shariff, Azim F. (3 October 2008), "The Origin and Evolution of Religious Prosociality" (PDF), Science, 322 (5898): 58–62, Bibcode:2008Sci...322...58N, doi:10.1126/science.1158757, PMID 18832637
- Nowak, Martin A (8 December 2006), "Five Rules for the Evolution of Cooperation" (PDF), Science, 314 (5805): 1560–63, Bibcode:2006Sci...314.1560N, doi:10.1126/science.1133755, PMC 3279745, PMID 17158317, archived from the original (PDF) on 18 May 2011
- Nowak, Martin A; Page, Karen M.; Sigmund, Karl (8 September 2000), "Fairness Versus Reason in the Ultimatum Game" (PDF), Science, 289 (5485): 1773–75, Bibcode:2000Sci...289.1773N, doi:10.1126/science.289.5485.1773, PMID 10976075, archived from the original (PDF) on 19 July 2011
- Nowak, Martin A.; Sigmund, Karl (16 January 1992), "Tit For Tat in Heterogeneous Populations" (PDF), Nature, 355 (6016): 250–253, Bibcode:1985Natur.315..250T, doi:10.1038/315250a0, PMID 3889654, archived from the original (PDF) on 16 June 2011
- Nowak, Martin A.; Sigmund, Karl (1 July 1993), "A strategy of win-stay, lose-shift that outperforms tit for tat in Prisoner's Dilemma" (PDF), Nature, 364 (6432): 56–58, Bibcode:1993Natur.364...56N, doi:10.1038/364056a0, PMID 8316296, archived from the original (PDF) on 4 July 2008
- Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor Books, ISBN 0-385-41580-X
- de Quervain, D. J.-F.; Fischbacker, Urs; Treyer, Valerie; Schellhammer, Melanie; Schnyder, Ulrich; Buck, Alfred; Fehr, Ernst (24 August 2004), "The Neural Basis of Altruistic Punishment", Science, 305: 1254–1258, Bibcode:2004Sci...305.1254D, doi:10.1126/science.1100735, PMID 15333831
- Rand, Ayn (1961), The Virtue of Selfishness: A New Concept of Egoism, The New American Library
- Rapoport, Anatol; Chammah, Albert M. (1965), Prisoner's Dilemma, Univ. of Michigan Press
- Riolo, Rick L.; Cohen, Michael D.; Axelrod, Robert (23 November 2001), "Evolution of cooperation without reciprocity" (PDF), Nature, 414 (6862): 441–43, Bibcode:2001Natur.414..441R, doi:10.1038/35106555, hdl:2027.42/62686, PMID 11719803
- Roughgarden, Joan; Oishi, Meeko; Akcay, Erol (17 February 2006), "Reproductive Social Behavior: Cooperative Games to Replace Sexual Selection" (PDF), Science, 311 (5763): 965–69, Bibcode:2006Sci...311..965R, doi:10.1126/science.1110105, PMID 16484485, archived from the original (PDF) on 21 July 2011
- Rousseau, Jean Jacques (1950) [1762], The Social Contract, E. P. Dutton & Co. [and others]
- Rusch, Hannes (2014), "The Evolutionary Interplay of Intergroup Conflict and Altruism in Humans: A Review of Parochial Altruism Theory and Prospects for its Extension", Proceedings of the Royal Society B: Biological Sciences, 281 (1794): 20141539, doi:10.1098/rspb.2014.1539, PMC 4211448, PMID 25253457
- Sanfey, Alan G. (26 October 2007), "Social Decision-Making: Insights from Game Theory and Neuroscience", Science, 318 (5850): 598–602, Bibcode:2007Sci...318..598S, doi:10.1126/science.1142996, PMID 17962552
- Sigmund, Karl; Fehr, Ernest; Nowak, Martin A. (January 2002), "The Economics of Fair Play" (PDF), Scientific American, vol. 286, no. 1, pp. 82–87, Bibcode:2002SciAm.286a..82S, doi:10.1038/scientificamerican0102-82, PMID 11799620, archived from the original (PDF) on 18 May 2011
- Stewart, Alexander; Plotkin, Joshua B. (17 September 2013), "From extortion to generosity, evolution in the Iterated Prisoner's Dilemma", Proceedings of the National Academy of Sciences, 110 (38): 15348–15353, Bibcode:2013PNAS..11015348S, doi:10.1073/pnas.1306246110, PMC 3780848, PMID 24003115
- Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism" (PDF), Quarterly Review of Biology, 46: 35–57, doi:10.1086/406755
- Vogel, Gretchen (20 February 2004), "News Focus: The Evolution of the Golden Rule", Science, 303 (5661): 1128–31, doi:10.1126/science.303.5661.1128, PMID 14976292
- Von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton Univ. Press
- Wade, Nicholas (20 March 2007), "Scientist Finds the Beginnings of Morality in Primitive Behavior", The New York Times: D3
- Williams, John D. (1954), The Compleat Strategyst, RAND Corp.
- Williams, John D. (1966), The Compleat Strategyst: being a primer on the theory of games of strategy (2nd ed.), McGraw-Hill Book Co.
- Wilson, Edward O. (2000) [1975], Sociobiology: The New Synthesis (25th Anniversary ed.), Harvard Univ. Press, ISBN 0-674-00235-0