Talk:Prisoner's dilemma/Archive 3

From Wikipedia, the free encyclopedia

learning psychology and game theory

Hello, I wrote that section and nobody has changed it yet. I'm no expert; I'm just trying to draw attention to moral dilemmas and developmental psychology. Surely someone else can improve it. It needs references, qualification and perhaps, in part, refutation. Come on if you think you're hard enough (joke!).

Andrew Francis



Your article seems a little too abstract compared to those written by mathematicians or computer experts, Andrew.
Though I'm particularly curious about this topic. I want to assemble a bibliography on this, if anybody else... I think of Jacques Lacan's "Le Temps Logique" (in Écrits), which is a neglected but important crossing point of computing theory and psychoanalysis (as is Hofstadter's example). --NoirNoir 06:28, 16 May 2005 (UTC)

What is the rational decision?

The article makes a mistake, in that the prisoners do not have to be in contact to reach the mutual best decision. If the prisoners assume that both of them are rational, and bound to make the best rational decision, they will choose to cooperate.

  • Actually, the whole point of the problem is "What is the best rational decision?"
And that is to defect, so the original question presents a misunderstanding.
The whole point is that both decisions are equally rational or irrational. The article says the following: "This illustrates that if the two had been able to communicate and cooperate, they would have been able to achieve a result better than what each could achieve alone". This needs to be specified more clearly -- the *total* jail time for both prisoners can be reduced to 1 year if both choose to cooperate, but this is not the best result for an individual (purely selfish) prisoner. The best result there is *no* jail time for oneself. Another issue: this line implies that if the prisoners had only been able to communicate, they'd choose to cooperate. This is far from certain. Communication in advance in fact does not weaken this dilemma at all. The only thing needed to create this dilemma is the non-communication (by a reliable source) of the other person's actual decision before you make your own. If the other person's decision is known, there *is* a rational choice to make (whatever your aims may be for you and your fellow prisoner). --Martijn faassen 00:07, 15 Mar 2004 (UTC)
Even if the other player's decision is known, the rational (ie selfish) choice is to defect, because the payoff for unilateral defection is higher than the payoff for mutual cooperation, and the payoff for mutual defection is higher than the payoff for unilateral cooperation. Communication does not solve the dilemma. --Michael Rogers 03:19, 4 January 2006 (UTC)

Hello! Just a quick comment. Bear with me. Doing this for the first time. I believe I remember having read the "Prisoner's Dilemma" in one of Plato's dialogues ... it was a problem suggested by Socrates. So I believe that saying it was originally formulated by the gentlemen in the article may not be completely correct. 03:49, 24 December 2007 (UTC)

The same misunderstanding AGAIN

A new paragraph in the article says:

The above formula, then, ensures that, whatever the precise numbers in each part of the payoff matrix, it is always 'better' for each player to defect regardless of what the other does.

No, that is wrong. Unless by putting "better" in quotes it is understood that it is not necessarily "better". It is better only for those who think that following the Nash equilibrium analysis is apt in the PD. What is being said is that there is NO DILEMMA. That what to do is obvious. No! This GT problem is famous for being something GT does not help resolve. Paul Beardsell 21:06, 12 Aug 2004 (UTC)


I'm not sure why the original author put better in quotes, but it is always better (and yes, Nash is appropriate here too). Let me break it down:

  • If the other player co-operates, I can defect or co-operate and it is better for me if I betray them by defecting
  • If the other player defects, I can defect or co-operate and it is better for me if I betray them by defecting

And by better I mean I 'score' higher in utility or reduced sentence or whatever measure is being used to define success.

Therefore defecting is always the best strategy, regardless of the opponent's choices. That's why it is the dominant strategy and why both players will always defect. It is also the only Nash Equilibrium for this game, because no player can do better by changing their strategy.
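The dominance argument above can be checked mechanically. The following sketch is my own illustration, not from the article: it assumes the conventional payoff values T=5, R=3, P=1, S=0 (higher is better) and brute-forces both the dominance claim and the uniqueness of the Nash equilibrium.

```python
# Sketch (assumed canonical payoffs T=5, R=3, P=1, S=0): check that Defect
# strictly dominates Cooperate and that (Defect, Defect) is the only
# profile from which neither player gains by deviating unilaterally.

from itertools import product

# payoffs[(my_move, their_move)] -> my payoff
payoffs = {
    ("C", "C"): 3,   # R: reward for mutual cooperation
    ("C", "D"): 0,   # S: sucker's payoff
    ("D", "C"): 5,   # T: temptation to defect
    ("D", "D"): 1,   # P: punishment for mutual defection
}

def dominates(a, b):
    """True if move a yields strictly more than move b against every opponent move."""
    return all(payoffs[(a, o)] > payoffs[(b, o)] for o in "CD")

def nash_equilibria():
    """Profiles where neither player can improve by unilaterally deviating."""
    eq = []
    for p1, p2 in product("CD", repeat=2):
        p1_ok = all(payoffs[(p1, p2)] >= payoffs[(d, p2)] for d in "CD")
        p2_ok = all(payoffs[(p2, p1)] >= payoffs[(d, p1)] for d in "CD")
        if p1_ok and p2_ok:
            eq.append((p1, p2))
    return eq

print(dominates("D", "C"))   # True: defecting is strictly dominant
print(nash_equilibria())     # [('D', 'D')]: the unique equilibrium
```

Any payoff values satisfying T > R > P > S give the same result; the specific numbers only matter for the iterated-game condition discussed later on this page.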

I'm repeating myself here but I just want to be clear: if my opponent is going to co-operate then I benefit to the maximum amount by defecting. If you could choose both your own and your opponent's decisions, this is what you would go with.

The only sense in which always defecting is not 'better' is that I can do better if three conditions are met:

A) I know that both players are going to defect, B) I can, with cast-iron certainty, get my opponent to co-operate, and C) I can only do so by giving an unbreakable agreement to co-operate in turn

Achieving A is easy; if you achieve B and C then you have changed the rules sufficiently that it is no longer the Prisoner's Dilemma. (And that is what Game Theory teaches us about Prisoner's Dilemmas in the real world. Change the rules!)

DavidScotson 23:02, 12 Aug 2004 (UTC)


I really do understand all that you say. But I do not agree with it entirely: One of the issues is the use of the word "better". Because, let's be plain, the "better" result you contemplate is that they both confess. Whereas we know that there is a BETTER result: That they both deny! That they feel compelled to confess is perhaps true but it is not "better". Paul Beardsell 13:35, 13 Aug 2004 (UTC)

Rationality doesn't always lead to Pareto optimality when costless and enforceable side contracts are not possible. Profundity06 22:44, 28 April 2006 (UTC)

The link that I have recently placed at the end of the article is to a document which does a better(!) job than our article at avoiding undefined wishy-washy terms. You might read that and say to me: "See, that is what I was saying all along!" And I say the same to you. We give a false impression of the Prisoner's Dilemma to game theory neophytes here. Paul Beardsell 13:35, 13 Aug 2004 (UTC)

Would "individually better" satisfy your objection? That's always true, no matter what the other guy does. Wolfman 04:31, 20 Sep 2004 (UTC)

who invented it?

I would like to know, who invented the problem? Samohyl Jan 16:45, 12 Mar 2005 (UTC)


I don't think anyone knows for sure. (Someone please correct me if I'm wrong.) Plato has a problem called the paradox of attrition, which is an n-person Prisoner's Dilemma. A two-person Prisoner's Dilemma occurs in Hobbes' Leviathan (see his response to the "Foole" in Chapter 15). This is the most I know. I can add a section, but my knowledge about this is only spotty. Anybody know any more? --Kzollman 21:20, Mar 12, 2005 (UTC)

On my recent extensive edits (25/05/2005)

I just wanted to ask that people please not revert my edits just because I edited a Featured Article. Let's discuss what you think is wrong here and take other people's opinions. It will make for a better article.

First Comment Evarh

Hi, folks. My name's Doug, and I just made a Wikipedia account tonight. This is my first comment on the WP.

I just wanted to say that this article on the Prisoner's Dilemma is certainly the most interesting and thought provoking dissertation that I have read in the past three years or so, possibly in my entire life (a humble twenty-three years, but I have noticed a couple of grey hairs these past few months or so). I'd love to congratulate the person who wrote it on her or his accomplishment, but it is undoubtedly a group effort, a fact that serves to further bolster my optimistic view that humanity has the capacity to rise up above our own Dilemma and do good for our fellow Prisoners, so to speak.

This is an inspiring piece of work; indeed, it is the reason that I made an account tonight. Kudos all around.

the lead

The prisoner's dilemma is a type of non-zero-sum game (game in the sense of Game Theory). In this game, as in many others, it is assumed that each individual player ("prisoner") is trying to maximise his own advantage, without concern for the well-being of the other players.

I think this lead could use some work. The part about what kind of game it is can probably be made clearer.-Grick(talk to me!) 07:30, July 25, 2005 (UTC)

This is *not* a good introduction

In the introduction section, it now says the following:

It is not necessary to assume that both prisoners are completely selfish and that their only goal is to minimise their own jail terms. However, the cases where either or both prisoners are sufficiently altruistic that they would cooperate, even knowing that the other prisoner will defect, are quite trivial. On the other hand, assuming that the prisoners are completely selfish, as is often done in non-philosophical developments of this model limits the explanatory power of the model. Namely, it does not allow the assignment of an unobservable category of intention to each model. More intuitively, the model does not fit with some (real) people's claim that they seek consensus partly out of altruism. The simplifying dismissal of these altruistic intentions such as above may be made in a more complex model by supposing that the propensity towards altruism is already considered when the relationships among changes in each agent's utility are stated.

This is in my opinion not a good introduction. It is stating an opinion, is hard to read, and if this criticism should stand in this article at all, it should not be in the introduction. I think some people apparently have a problem with the basic idea that the "dilemma" in the Prisoner's Dilemma derives from the prisoners' selfish intent to serve as short a jail term as possible. I'll change this section to a more sensible introduction. Martijn Faassen 20:20, 26 October 2005 (UTC)

The dab header is such a let-down. All articles should have sufficient background to make them accessible to the reader (by explaining a bit of the context). That is not to say the readers have to be taught game theory, just that the article should be clear to the reader without such a dab header. Elle vécut heureuse à jamais (Be eudaimonic!) 11:38, 15 January 2006 (UTC)

Chicken

Over at Wikipedia:WikiProject Game theory it has been suggested that the Game of chicken be removed from this page. I think this is right. I don't see how Chicken is any more like the Prisoner's dilemma than any other 2x2 game. If there are no objections one of us will remove it and replace it with a brief summary of the Centipede game and the Diner's dilemma, both of which are very similar to the Prisoner's dilemma. Sound good? --best, kevin ···Kzollman | Talk··· 17:57, 12 November 2005 (UTC)


overextensive use of the first and second person

Much of the article could be phrased without use of the first and second person. This makes the article as formal and professional as possible. If this isn't cleaned up soon, I'm afraid I would have to nominate it for FAC removal. Sentences like:

When your opponent defects, on the next move you sometimes cooperate anyway with small probability (around 1%-5%). This allows for occasional recovery from getting trapped in a cycle of defections.

are too informal and rude to the reader (for an encyclopedia), and are not up to the standard of a featured article, because the second person should never be used in an encyclopedia when portraying a hypothetical scenario. If we're aiming for professionalism, we should avoid "forcing" a hypothesis on the reader by using the second person. It doesn't need to be used in that scenario: it could be phrased in either the passive voice or the third person ("the player's" rather than "your"). Elle vécut heureuse à jamais (Be eudaimonic!) 11:53, 15 January 2006 (UTC)
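The forgiving variant quoted above is concrete enough to sketch in code. This is my own illustration (function names and the rounds parameter are mine, not from any article text): tit for tat that occasionally forgives a defection, showing how it recovers from the echo cycle that a single accidental defection causes between two strict tit-for-tat players.

```python
# Illustrative sketch of "generous tit for tat" as described in the quoted
# passage: respond to defection with defection, but with a small probability
# ("around 1%-5%") cooperate anyway, breaking cycles of mutual retaliation.

import random

def generous_tft(opponent_last, generosity, rng):
    """Tit for tat, except: after an opponent defection, cooperate anyway
    with probability `generosity`."""
    if opponent_last == "D" and rng.random() >= generosity:
        return "D"
    return "C"

def play(rounds, generosity, seed=0):
    """Two generous-TFT players; player 1 defects once by mistake in round 1."""
    rng = random.Random(seed)
    last1, last2 = "C", "C"
    history = [("C", "C")]
    for r in range(1, rounds):
        move1 = "D" if r == 1 else generous_tft(last2, generosity, rng)
        move2 = generous_tft(last1, generosity, rng)
        last1, last2 = move1, move2
        history.append((move1, move2))
    return history

# With zero generosity the single error locks strict TFT into an endless
# alternating echo; with generosity the pair recovers to mutual cooperation
# (generosity=1.0 is an extreme value chosen to make the demo deterministic).
strict = play(50, generosity=0.0)
forgiving = play(50, generosity=1.0)
print(("C", "C") in strict[2:])   # False: the echo never dies out
print(forgiving[-1])              # ('C', 'C'): cooperation restored
```

With a realistic 1%-5% generosity the recovery happens after a random number of rounds rather than immediately, which is exactly the "occasional recovery" the quoted sentence describes.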

I've purged all the "you"s from the article, except where necessary. Johnleemk | Talk 11:58, 15 January 2006 (UTC)

Pictures needed

This could benefit from pictures and such. Can anybody think of some suitable ones? Even color graphs would be a great boon. --Piotr Konieczny aka Prokonsul Piotrus Talk 20:44, 5 February 2006 (UTC)


Pavlov or Simpleton

Very readable article; objecting to "you" is just prissy.

The article doesn’t mention Pavlov or Simpleton (the same thing I gather). Somewhere I read that it was able to take over if, beforehand, Tit for Tat had cleaned out the nasties. That is, it would displace Tit for Tat. As I recall, Pavlov simply did whatever worked before, this being analogous to our tendency to do this time whatever worked for us last time. - Pepper 150.203.2.85 01:35, 12 February 2006 (UTC)
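The "do whatever worked before" rule the comment recalls is usually called win-stay, lose-shift. Here is a small sketch of it (my own illustration; the payoff values T=5, R=3, P=1, S=0 and the aspiration threshold are assumed, not from the comment), including the known behaviour that lets Pavlov exploit unconditional cooperators once a defection has "worked".

```python
# Sketch of the "Pavlov" (win-stay, lose-shift) rule: repeat the previous
# move if it earned a good payoff (T or R), otherwise switch moves.
# Payoffs and helper names are my own illustrative choices.

payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def pavlov(my_last, opponent_last):
    """Win-stay, lose-shift: keep the last move if its payoff was at least R
    (mutual cooperation, or a successful defection); otherwise switch."""
    if payoffs[(my_last, opponent_last)] >= 3:   # it worked: stay
        return my_last
    return "D" if my_last == "C" else "C"        # it failed: shift

def run(strategy_b, rounds=6, a_first="C", b_first="C"):
    """Pavlov (player A) against some rule giving player B's next move."""
    a, b = a_first, b_first
    trace = [(a, b)]
    for _ in range(rounds - 1):
        a, b = pavlov(a, b), strategy_b(b, a)
        trace.append((a, b))
    return trace

always_cooperate = lambda my_last, opp_last: "C"

# Pavlov against itself settles into steady mutual cooperation...
print(run(pavlov))
# ...but after one accidental defection against a naive cooperator, the
# defection 'worked' (payoff T), so Pavlov keeps exploiting indefinitely.
print(run(always_cooperate, a_first="D"))
```

That second behaviour is the mechanism behind the recollection above: in a population where tit for tat has already eliminated outright exploiters, Pavlov's willingness to exploit over-forgiving strategies can let it displace tit for tat.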


Cooperation

The cooperation-upon-recognition play that beat tit-for-tat is something secret societies do with each other. President Bush's secret society Skull and Bones likely has classic secret society rules that mean you do not purposely interfere with other members within the society, allowing cooperation in a competitive environment where everyone else creates losing dilemmas through non-cooperation and interference with each other (the secret society prospered with itself and had better opportunities as a result). The key is recognizing society members; they then played the Prisoner's Dilemma with a specific goal for the secret society.

interesting application of game theory and the prisoner's dilemma

I recently read the book Games Prisoners Play: The Tragicomic Worlds of Polish Prison by Marek M. Kaminski. In it, he applies game theory to analyze the actions of Polish prisoners; the examples are taken from his experience as a political prisoner. I thought the book was well written, and it provides an excellent "real" example of the prisoner's dilemma. I didn't want to tread on the Wikipedia article, so I thought I'd mention it here. Lunch 07:34, 16 March 2006 (UTC)

First Example

The first example that is shown has the sucker's penalty at more than double the punishment penalty, which is incorrect in my view, as the overall penalty is supposed to get worse with each betrayal, so that betraying is always worse for the group but always better for the individual.

(c,c)=(2,2) (c,b)=(5,0) (b,c)=(0,5) and (b,b)=(4,4)

is a much more classical form of the dilemma.
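The claims in the comment above can be verified directly. The sketch below is my own check (the dictionary layout is mine): since the entries are jail terms, *smaller* is better, so the usual T > R > P > S ordering runs in reverse, and the comment's two properties hold: each betrayal worsens the group total while always helping the individual betrayer.

```python
# Check (my own sketch) that the matrix proposed above is a valid Prisoner's
# Dilemma when entries are jail terms, where smaller is better.

# years[(my_move, their_move)] -> my jail term, from the comment's matrix
years = {("c", "c"): 2, ("c", "b"): 5, ("b", "c"): 0, ("b", "b"): 4}

temptation = years[("b", "c")]   # 0: betray a cooperator, walk free
reward     = years[("c", "c")]   # 2: mutual cooperation
punishment = years[("b", "b")]   # 4: mutual betrayal
sucker     = years[("c", "b")]   # 5: cooperate against a betrayer

# With jail terms the T > R > P > S ordering flips direction:
print(temptation < reward < punishment < sucker)             # True

# The group total worsens with each betrayal: 4 < 5 < 8 years combined...
totals = [years[("c", "c")] * 2,
          years[("c", "b")] + years[("b", "c")],
          years[("b", "b")] * 2]
print(totals == sorted(totals))                              # True

# ...yet betraying always shortens the individual's own sentence:
print(all(years[("b", o)] < years[("c", o)] for o in "cb"))  # True
```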

no temptation

The section of text reading:

If the game is iterated (played more than once in a row), the mutual cooperation total payment must exceed the temptation total payment, for reasons explained later:

2R > T+S

seems strange. Without trying to do original work here, just as a reader, the two following things jump out: A) I have been unable to find where these reasons are explained later (probably me being blind); B) this seems to remove the temptation to defect. If the optimal position for both players is to cooperate, then they will cooperate... this changes the fundamental assumptions of the basic game, and thus the viability of tit for tat and changes for other iterative strategies become less about the iterations and more about the (enhanced) rewards for cooperation... that formula says this matrix would be valid:

            Cooperate    Defect
Cooperate    5, 5        -5, 6
Defect       6, -5      -10, -10

In short, shouldn't it be:

T+S > or = R Darker Dreams 22:51, 21 April 2006 (UTC)

The reason was lost somewhere along the way. I've added it now. As for your other comment, the inequality in no way changes the Nash equilibrium in the single game; it has an effect only on the iterated version, where it makes the equilibrium tend to the Pareto optimum. Loom91 13:18, 24 April 2006 (UTC)

Perfectly Rational

It doesn't seem to me as if a perfectly rational prisoner would choose to defect at all in the first place, assuming his prison-mate were also perfectly rational. Since they are both perfectly rational, and they both have access to the exact same information, they would both end up coming to the same conclusion. And since they are both perfectly rational, each would be aware of that fact. So they both know that the only two real possible outcomes are that they both defect or they both cooperate. They will be much better off by both cooperating than they would be by both defecting, so realizing this, would they not choose to cooperate? It seems that you would only get people defecting when they aren't perfectly rational, or they know their partner isn't perfectly rational. Uniqueuponhim 05:46, 25 April 2006 (UTC)

  • It's easy to run into such logical fallacies when using the subjective intricacies of language. That's why game theory is done in the precise language of mathematics, and it can be proven that the Nash Equilibrium of this game is (defect, defect) and not (cooperate, cooperate). But you make a good point. Loom91 07:45, 25 April 2006 (UTC)
You say that I've run into a logical fallacy, but you do not indicate where. Logically, what would go through each prisoner's mind is the following: "Whatever conclusion I come to, my counterpart will come to the same conclusion and make the same choice, as we are both perfectly rational. Thus, if I choose to defect, he will necessarily choose the same thing. If I choose to defect, the only possible outcome is that I serve two years. However, if I choose to cooperate, then my counterpart will as well, and thus, the only possible outcome is that I will only serve six months. Serving six months is a better outcome than serving two years, so the rational choice would be to cooperate."
The key to this logic is that both prisoners, if perfectly rational, must arrive at the same conclusion. Since they are both perfectly rational, they will also realize this, and know that there is zero possibility of them choosing differently. They are thus only left with the two possibilities of both cooperating or both defecting. Among these, both cooperating is the best choice and so they will both choose that. The only time this wouldn't hold would be when both prisoners are not perfectly rational. Uniqueuponhim 19:14, 25 April 2006 (UTC)
    • The choices of the two prisoners are technically independent, in the sense that they will both do only what their logic tells them to do and are not influenced by the other prisoner. Now from this perspective the questions of what the other will do and what I'll do are not different, though related. I first evaluate the question of what the other prisoner will do and find that whatever the answer is I'll always be better off by defecting, irrespective of whether he cooperates or defects. That is why I defect. Loom91 07:35, 26 April 2006 (UTC)
A key point here is the value (or cost) of the "sucker's payoff", that is, what I get if I cooperate but you defect (and the related value of "temptation", that is, what you get for the same result). In the Prisoner's Dilemma this is usually a large negative, much worse than the benefit accrued if we both defect. Given this and, as Loom91 notes, the independence of the two players (no discussion, no shared history, no knowledge of the other's decision), the only rational approach is to defect. Essentially, why risk the sucker's payoff against someone you don't know and with whom you haven't agreed a strategy beforehand? Of course, all this changes if, for example, iterative games are played, or if the payoffs are slanted differently (but then it's not the Prisoner's Dilemma). Hope this helps, --Plumbago 08:54, 26 April 2006 (UTC)
That still doesn't make any sense. My point is that with two rational prisoners, they will both realize that the other one must always come to the same conclusion as he does. I do understand the logic that you are using that states that either prisoner is always better off defecting no matter what the other prisoner chooses, but that doesn't take into account the fact that the prisoners cannot make two different choices, and that they will both be able to deduce this fact.
Without this deduction, it will happen as you explain it: each prisoner will rationally think out: "If he cooperates, then if I defect, I get no time versus six months if I cooperate, so defecting is better in that case. If he defects then I get two years for defecting and ten for cooperating, so again, defecting is better. Thus I should defect." I understand this.
However, with the deduction, it is altered significantly, by removing two of the possibilities. To reflect that, let's reword your logic a bit, to take the deduction into account: "If the other prisoner defects, then he will only have done so when I have as well. So if I defect, we both receive two years. If the other prisoner cooperates, he will only have done so in the case that I will as well. So if I cooperate, we both receive six months." Basically, because you've removed any possibility of the prisoners choosing differently, the prisoner is essentially deciding between two years if he defects and six months if he cooperates, so he will always choose to cooperate. Uniqueuponhim 14:06, 26 April 2006 (UTC)
It seems like you don't understand what rational means in this context. It's not even true that two rational players must employ symmetric strategies in a symmetric game (see Game of Chicken). Profundity06 19:47, 28 April 2006 (UTC)
This is one of those questions where people just talk past each other; it brings up fundamental philosophical questions relating to free will. See also Newcomb's paradox. --Trovatore 13:49, 26 April 2006 (UTC)
Loom91 writes "they will both do only what their logic tells them to do and is not influenced by the other prisoner", which is not entirely correct, in that players base their actions on expected probabilities that the other player will play one or the other strategy. The Prisoner's Dilemma has little of this aspect, however, since there is a dominated strategy. No matter what the other player chooses, a prisoner's payoff is always higher if he/she plays Defect. If the other player plays Cooperate, then Defect pays more. If the other player plays Defect, then Defect pays more. Phrased this way, it's hard to see how "Perfectly Rational" players are supposed to choose Cooperate. There are cases (see e.g. subgame perfection) in which the Nash Equilibrium isn't something that really squares with common sense, but the PD is a poor example of such a thing. Pete.Hurd 14:42, 26 April 2006 (UTC)
Ah, but the problem is, you cannot phrase it that way, because what the other prisoner chooses is wholly dependent upon what you choose. The phrase "If the other player plays Cooperate, then Defect pays more" is irrelevant, because such a situation will never arise. If the other player plays cooperate, it is only because you have both arrived at the conclusion that cooperating is better, and thus, are both playing cooperate. With the knowledge that such a situation will never arise, neither prisoner can expect to possibly get zero years by choosing defect, nor can they expect to serve ten years by choosing to cooperate. The choice boils down to simply: six months for cooperating and two years for defecting.
Perfectly rational individuals will realize that NO MATTER WHAT the other player chooses, they are better off defecting, and thus, independent of what the other player does, they defect. This particular game doesn't involve much of strategic interest. It doesn't make much sense to say that two people are rational and each knows that, but then go on to contradict the usual understanding of rational in game theory -- an expected-utility-maximizing agent. Profundity06 19:25, 28 April 2006 (UTC)
There isn't even a Nash equilibrium at Defect,Defect either. The Nash equilibrium exists at a point at which neither prisoner stands to gain anything by changing their choice. However, at Defect,Defect, both prisoners know that if they change to cooperate, then the other prisoner will as well, so one prisoner changing from defect to cooperate doesn't change it from Defect,Defect to Cooperate,Defect, but rather changes it from Defect,Defect to Cooperate,Cooperate. Thus, they have gone from serving two years to serving six months, so they certainly stand to gain by changing their choice, and so no equilibrium can exist at Defect,Defect (as long as both prisoners are perfectly rational, of course). Uniqueuponhim 20:32, 26 April 2006 (UTC)
  • "is irrelevant, because such a situation will never arise" not so, what would happen if you were to do something is very important in deciding what to do.
  • "There isn't even a Nash equilibrium at Defect,Defect either" I think perhaps you've misunderstood what a Nash equilibrium is, or how game theory works.
Pete.Hurd 20:52, 26 April 2006 (UTC)
Using the logic of Unique, we can construct the following circular argument. If prisoner A determines that they are both going to play Cooperate, then it will be better for A to play Defect. But once A determines that he is going to play Defect, it follows that the other is going to play Defect too, and it will be better for A to play Cooperate, as B will also play Cooperate then. To be frank, the issue is not completely clear to me either. I think the question is whether two real-world players who are rational in the real-world sense will play the Nash Equilibrium. Loom91 17:36, 26 April 2006 (UTC)
"it follows that the other is going to play Defect too *here* and it will be better for A to play Cooperate as B will also play Cooperate then". "Here" marks the end of the stuff that is correct. Once B decides to defect, A still gains more by defecting. A discoordination game would have the reaction correspondence implied by the rest of your description. Whether (more often, why not) real-world players play the Nash is a very active field of research; see experimental economics. Pete.Hurd 18:27, 26 April 2006 (UTC)

Pregnancy? What the...?

Found this line in the section A similar but different game: The pregnancy of this problem is suggested by the fact...

So, uh, I'm wondering if the word pregnancy is at all correct here... T. S. Rice 23:06, 25 May 2006 (UTC)

Nothing wrong about it, but the wording is not very clear. Feel free to improve. Loom91 06:35, 26 May 2006 (UTC)


game theory selfish?

The current version claims game theory assumes selfish behavior. This is not true. Players maximise their payoffs, but one player may take pleasure or pain from another's success. Evolutionary game theory, for example, assumes payoffs are proportional to genes in common.

For those coming to the PD for the first time, looking at the mechanics of the classic case is surely the best way to understand the point.

Hofstadter's magical thinking should surely be treated as that. Given the game, the individual raises their payoff by defecting. Really it should be struck out altogether. —Preceding unsigned comment added by DEDemeza (talkcontribs)

What is to be done?

This may be a featured article but it's wrong in many respects. Start with the first paragraph. It claims that game theory assumes each player has no "...concern for the well-being of the other players". This is patently untrue. If players are concerned for others the payoffs change but the rest of the mechanics stay the same. Suppose that jail terms are as described in the "classic example" but each player is fully altruistic, putting as much weight on the other player's utility as on their own. Now for each cell each player's payoff is the sum of the prison terms of them both. It is then straightforward that there are two Nash equilibria: both confess and neither does. The point is that there is no conceptual difficulty in incorporating other-regarding preferences, and this is standard doctrine.
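The two-equilibria claim in the paragraph above is easy to verify. The sketch below is my own illustration; the jail terms (6 months, 10 years, 2 years) are taken from the classic example discussed elsewhere on this page, and the full-altruism utility (minus the *sum* of both prison terms) follows the paragraph's own definition.

```python
# Sketch of the altruism transformation described above: a selfish player
# minimizes only their own jail term; a fully altruistic player minimizes the
# total jail time of both. The transformed game gains a second equilibrium.

from itertools import product

# years[(my_move, their_move)] -> my jail term (classic example values)
years = {("C", "C"): 0.5, ("C", "D"): 10, ("D", "C"): 0, ("D", "D"): 2}

def utility(me, other, altruistic):
    """Selfish: minus own term. Fully altruistic: minus the combined total."""
    if altruistic:
        return -(years[(me, other)] + years[(other, me)])
    return -years[(me, other)]

def nash_equilibria(altruistic):
    """Profiles where neither player gains by unilaterally deviating."""
    eq = []
    for p1, p2 in product("CD", repeat=2):
        p1_ok = all(utility(p1, p2, altruistic) >= utility(d, p2, altruistic)
                    for d in "CD")
        p2_ok = all(utility(p2, p1, altruistic) >= utility(d, p1, altruistic)
                    for d in "CD")
        if p1_ok and p2_ok:
            eq.append((p1, p2))
    return eq

print(nash_equilibria(altruistic=False))  # [('D', 'D')]: the usual dilemma
print(nash_equilibria(altruistic=True))   # [('C', 'C'), ('D', 'D')]
```

As the reply below notes, changing the payoffs this way produces a different game rather than refuting the Prisoner's Dilemma itself; the sketch only shows that the mechanics handle other-regarding preferences without difficulty.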

Second, the reference to a banker is odd. In the "classic" example I suppose you would have to say that the banker is the police, but this seems odd terminology. Or what if you took a tragedy of the commons case: each player decides whether to put one or two sheep on the commons. Where is the banker now?

So the first paragraph should delete claim that game theory assumes self regarding behaviour and the reference to the banker.

Many paragraphs have worse problems than this. I don't really know where to start especially as it will all get deleted. —Preceding unsigned comment added by DEDemeza (talkcontribs)

Game theory assumes that the payoffs reflect the players' interests, and that players have no concern for other players' payoffs. "Suppose that jail terms are as described in the "classic example" but each player is fully altruistic putting as much weight on the other player's utility as on their own." Well then, that's a totally different game once you change the players' payoffs to be something else; it's not the Prisoner's Dilemma at all anymore. This is not a problem with the article, it's you wanting the game to have different payoffs than it does. "The point is that there is no conceptual difficulty in incorporating other regarding preferences" Umm, sure, one could come up with games that do this, but then what you have is something quite different from the topic of this article. Pete.Hurd 23:02, 22 August 2006 (UTC)
Yeah, I'm not sure what the "Banker" is doing in there... Needs a subtle copyedit tweaking, that's all Pete.Hurd 23:09, 22 August 2006 (UTC)

I agree with what Pete Hurd says, but the previous version claims that game theory cannot handle other-regarding preferences. This claim is irrelevant to the PD and wrong, so it should be deleted. —Preceding unsigned comment added by DEDemeza (talkcontribs)

Game theory cannot handle preferences which are not made explicit in the player's own payoffs. PS Please sign your comments with four tildes, it makes it easier on everyone else. Pete.Hurd 23:17, 22 August 2006 (UTC)

Again I agree, but it's not going to be helpful to say this in the PD article. But I am mainly hoping I have mastered the signature routine. DEDemeza 23:28, 22 August 2006 (UTC)

Still on the first para, wouldn't it be better to say that betray strictly dominates rather than that cooperate is strictly dominated? Agreed that with only two strategies one implies the other, but more generally the relevant concept is the former.

I would not use both "betray" and "defect" even though they are explained as synonyms. Confusing at this point. Both terms relate to a context that is not yet explained. Actually, I doubt this paragraph will mean much to most readers without the basics of game theory, which will be a high proportion of visitors. Many of the rest will already know it. I would go straight to the "classic" example.

Para two. I strongly feel it's inappropriate to have the Hofstadter stuff here. It is not made clear that virtually all game theorists regard this as a fallacy (one displayed in earlier contributions to this discussion). So it will only confuse novices; at the least the fallacy should be exposed. Granted that play is independent and the game one-shot, if a player switches to defection from the putative cooperative equilibrium, then in every possible outcome that player will have a higher payoff. Hofstadter probably wants to think the world is like the ideal of a sixties commune, but bending logic is not the right way to do it. Cooperation does occur in the world, but it's either in the preferences or because many of the relevant games are repeated. DEDemeza 06:59, 23 August 2006 (UTC)

Iterated Prisoner's Dilemma

The section on Axelrod's writing is interesting. However, I've been told that empirical research tends to show that even if players start out with cooperative strategies, they quickly (within 50 or so rounds) converge to the defection outcome. I should look into it some more. Profundity06 19:32, 28 April 2006 (UTC)

"By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful." The conditions specified by Axelrod are not necessary for a successful strategy. I will not dwell on this right now, but in an evolutionary setting (which his tournament was) the success of a strategy depends on the types of the other strategies, so it's the ecology or strategy population in question which dictates what's successful and what's not. If we assume standard replicator dynamics, which is probably the most widespread model of deterministic evolution, nice and retaliatory strategies are most robust; that is, they need the smallest stabilizing frequency, which converges to 0.5. In other words, the incumbent strategy population cannot be driven to extinction by any invading group that is not at least as big as it is (Bendor & Swisstak, The Evolutionary Stability of Cooperation, The Am. Pol. Science Review, vol. 91, iss. 2, 1997). Leppone 23:35, 18 April 2007 (UTC)
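The replicator dynamics mentioned above can be sketched for the baseline case. The following is my own illustration (payoff values T=5, R=3, P=1, S=0 and the Euler step size are assumptions): in the *one-shot* game, cooperators are driven to extinction from any interior starting frequency, which is why the population mix of strategies matters so much for the iterated-game stability results being discussed.

```python
# Minimal replicator-dynamics sketch for the one-shot PD with two pure
# strategies: Cooperate and Defect. A strategy's share grows when its
# expected payoff exceeds the population average; here Defect always wins.

def replicator_final_share(x0, steps=10000, dt=0.01):
    """Final cooperator share under dx/dt = x * (f_C - f_avg), Euler steps."""
    T, R, P, S = 5, 3, 1, 0
    x = x0                              # frequency of cooperators
    for _ in range(steps):
        f_c = R * x + S * (1 - x)       # cooperator's expected payoff
        f_d = T * x + P * (1 - x)       # defector's expected payoff
        f_avg = x * f_c + (1 - x) * f_d
        x += dt * x * (f_c - f_avg)
        x = min(max(x, 0.0), 1.0)       # keep the frequency in [0, 1]
    return x

# Even starting at 99% cooperators, defectors take over the population:
# f_D - f_C = (T-R)x + (P-S)(1-x) is positive for all x, so x always falls.
print(replicator_final_share(0.99) < 0.01)   # True
```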


Generalized form

I don't understand the following passage;

"If the game is iterated (played more than once in a row), the mutual cooperation total payment must exceed the temptation total payment, because otherwise the iterated game would not have a different Nash Equilibrium (see section on iterated version):

2 R > T + S"

It is well known by the folk theorem that the earlier condition T > R > P > S yields an infinity of equilibria in an infinitely repeated game or one with a random end period. So the iterated PD adds a lot of equilibria even without a change of payoffs. A minor point is that it is unclear whether the new condition is intended to replace the previous one (in which case it allows games that are not PDs) or to be a further requirement.
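For concreteness, the two inequalities under discussion can be written out as small checks (the function names are mine):

```python
# The two payoff conditions under discussion, written as checks.
def is_pd(T, R, P, S):
    """The basic one-shot ordering: temptation > reward > punishment > sucker."""
    return T > R > P > S

def no_profitable_alternation(T, R, P, S):
    """The extra condition 2R > T + S: mutual cooperation beats coordinated
    alternation between (C,D) and (D,C)."""
    return 2 * R > T + S

# The classic numbers satisfy both conditions...
print(is_pd(5, 3, 1, 0), no_profitable_alternation(5, 3, 1, 0))    # True True
# ...but a game can satisfy the ordering yet fail the extra condition:
print(is_pd(10, 3, 1, 0), no_profitable_alternation(10, 3, 1, 0))  # True False
```

Whether the second condition should be part of the definition of a PD is exactly the dispute below.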

In fact the section is not really a generalisation at all. It just provides a slightly different context. Generalisations, though not specially interesting ones, would be to many players and many strategies. I suppose you could say there is a generalisation in replacing the numbers in the payoff matrix by symbols, but that is not very interesting, and if it is to be done it does not require a special section. The terminology for the generic payoffs seems to me terrible, but let's not go into that. The whole section should be deleted. DEDemeza 21:55, 23 August 2006 (UTC)

"The whole section should be deleted": not everything that you don't understand ought to be immediately deleted. Pete.Hurd 22:09, 23 August 2006 (UTC)

I am English. What I really mean is that the revised condition is wrong. If you think it is right, explain. (The entry on repeated games is fine and confirms my claim.) However, my reason for advocating deletion of the whole section is not that that particular point is wrong but that at best it adds virtually nothing to the previous section.

What has to happen before it is reasonable to delete the section?DEDemeza 22:36, 23 August 2006 (UTC)

I'm afraid I don't understand what you are disagreeing with. As I recall the folk theorem has some requirement for the payoffs to be "feasible" or something like that. In particular that the potential punishment from defecting must be sufficiently harsh so as to warrant cooperation. I don't think anything with T > R > P > S will satisfy that, although I may be wrong. --best, kevin [kzollman][talk] 22:59, 23 August 2006 (UTC)

You are wrong. The folk theorem implies that for any PD, with a sufficiently low discount rate (or high probability of continuation), both players cooperating can be sustained as an equilibrium. So too can lots of other outcomes. You might look at the paper on Aumann on the Nobel Prize site. Though, as I say, this is not the main reason for suggesting deletion of the section. DEDemeza 23:13, 23 August 2006 (UTC)

The idea that any game that follows those inequalities classifies as a PD will be new to most readers. It is not commonly understood, outside of mathematics, that classes of mathematical objects can all share the same interesting properties while being instantiated differently. It seems important to me that we explain that a game need not have: (a) those exact jail terms, (b) have jail terms at all, or (c) even be about prisoners. The section, I think, at least gestures in that direction. I agree that it could probably do to be rewritten. On a stylistic note, you would probably find other editors more open to your suggestions if you refrained from calling their writing "terrible." --best, kevin [kzollman][talk] 00:02, 24 August 2006 (UTC)


In fact the article opens with a fairly general statement of the PD. As I said in an earlier talk, this seems to me probably inappropriate for most readers, who will not grasp the point. So starting with an example is best. However, if having gone through this a reader does not realise that the result does not depend on the particular numbers chosen, this article is surely not for them. My inclination would be to start with the classical example, pull together the essential requirements and outline a few other applications. Sorry if I was intemperate. Partly this was the result of my previous talk, where it did not seem to be appreciated that I was being critical, so I thought I needed to be more direct. I did not explain why I thought the terminology for the payoffs should be changed, as I was arguing on other grounds that the section should go. But if it stays, at some point I will make my case. DEDemeza 08:38, 24 August 2006 (UTC)

I see that Loom91 has reinstated the condition for a repeated PD to have an equilibrium other than mutual defection, but added a reference to the "Selfish Gene". I don't have a copy to hand, but... 1) The "Selfish Gene" is a wonderful book, but it is not an authority on game theory. For example, Dawkins would not at that time have known of subgame perfection and probably still does not. 2) It is an absolutely standard result in game theory that any infinitely repeated PD has a cooperative equilibrium when the interest rate is sufficiently low. See for example on the Nobel Prize site http://nobelprize.org/nobel_prizes/economics/laureates/2005/ecoadv05.pdf starting p13. So no extra condition on the payoffs of the stage game is needed if an IPD is to have a cooperative equilibrium. 3) Even if there is some sense in the condition (which I doubt), in the context of an entry it cannot just be plucked from the air without explanation. I have some professional expertise in game theory, though I must admit it would not be correct to call me a game theorist. If, though, it is a mystery to me why the folk theorem is not applicable, or what the rationale of this condition is, then I do think there is something wrong in what is intended as an expository piece. Certainly the section was previously defended on the grounds that some readers, seeing a numerical example, would not understand that those particular numbers are not necessary for the conclusions to follow.

Well, I have found it instructive to take part in the editing process, but I can see it will be too frustrating to continue. This may appear to be sour grapes, but I do think the article is wrong in many respects and in others is written so as to give a misleading impression. I won't be adding it to the references on my handout. Nice photo though. x x x DEDemeza 14:40, 24 August 2006 (UTC)

I was going to make that my last, but just to add this point. The extra condition means that the aggregate payoff is highest when both players cooperate rather than when one player cooperates and the other defects. It is debatable that maximising aggregate payoff is socially optimal, but if that were the case, and it is desired that both cooperating be socially optimal, then the payoffs must satisfy this condition. However, this applies whether or not the game is iterated, whereas the Wiki passage says the condition is something to do with iteration. DEDemeza 15:07, 24 August 2006 (UTC)

The reason for imposing this condition was explained with an example in the Selfish Gene; I suggest you read the chapter Nice Guys Finish First and see if that changes your professional opinion. In any case, you are welcome to add your opposition to the article without deleting the existing referenced statement, as long as you provide references yourself. Loom91 09:06, 25 August 2006 (UTC)

I don't have to read Dawkins to know that what appears in Wiki is wrong. I do have to look at the Selfish Gene to know whether what he says is correct. I have now done so, and Dawkins is correct, just a bit silly. He mentions the condition on p204 but does not "explain" its role till p211, and then a bit obliquely. The Wiki passage is not at all consistent with what Dawkins says. According to Wiki, "If the game is iterated (played more than once in a row), the mutual cooperation total payment must exceed the temptation total payment, because otherwise the iterated game would not have a different Nash Equilibrium (see section on iterated version)". As I pointed out in a previous talk (with reference), it is a basic result of game theory that whether or not the extra condition is imposed, an infinitely repeated PD has an infinity of Nash equilibria and also an infinity of subgame perfect Nash equilibria. So it is not the case that the condition is needed to create a different Nash equilibrium, namely cooperate-cooperate, if the game is played repeatedly. Fortunately for him, Dawkins does not claim this. What he observes is that if T + S > 2R then the aggregate payoff from cooperate-defect is higher than from cooperate-cooperate. If the game is repeated, suppose that cooperate-defect is played every period but the players alternate which one defects. Then, ignoring discounting (Dawkins does not mention this), both players are better off than if cooperate-cooperate were played every period. So if you define a PD as a game in which cooperate-cooperate is Pareto efficient in the repeated game, you need the condition. But it has nothing to do with the condition being needed to change the Nash equilibrium, as the Wiki entry claims. Actually, I do not think it is standard amongst game theorists to define a "true" PD as Dawkins does. None of the definitions found on Google do.
Here is a typical one: "In game theory, refers to a case in which players select their dominant strategies and achieve an equilibrium in which they are worse off than they would be if they could all agree to select an alternative (non-dominant) strategy". So to satisfy this definition the extra condition is not required. Of course Dawkins is free to adopt whatever definition he wants, but an encyclopedia should try to give the standard usage. DEDemeza 19:52, 25 August 2006 (UTC)
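The alternation argument is easy to verify numerically. A sketch assuming the classic payoffs R, P, S = 3, 1, 0, with T = 5 (so 2R > T + S) and a hypothetical T = 10 (so 2R < T + S):

```python
# Per-round average to player 1 under two repeating schedules:
# permanent mutual cooperation vs coordinated alternation of defection.
def avg_payoff(schedule, T, R, P, S):
    """Average per-round payoff to player 1 over a repeating schedule of
    (player 1 move, player 2 move) pairs."""
    table = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    return sum(table[pair] for pair in schedule) / len(schedule)

mutual = [("C", "C"), ("C", "C")]
alternate = [("D", "C"), ("C", "D")]  # the players take turns defecting

for T in (5, 10):
    print(T, avg_payoff(mutual, T, 3, 1, 0), avg_payoff(alternate, T, 3, 1, 0))
# Prints 5 3.0 2.5 (cooperation wins) and 10 3.0 5.0 (alternation wins).
```

By symmetry player 2's averages are the same, which is the point: when T + S > 2R, coordinated alternation makes both players better off than mutual cooperation, whatever one thinks the condition's role in the definition should be.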


In summary, if the condition is to be retained the Wiki passage must be rewritten. It is not required to create new Nash equilibria. But it would be better to drop the condition as it has no real role. The whole section should be dropped. The article requires considerable rewriting.

I have reworded the sentence in the article to conform to the facts agreed upon by everyone. I will get a copy of Dawkins and read it sometime next week so that I can chime in on the matter. I don't mean for this version to be the "final" version or anything, just something that minimizes error for the time being. May I again suggest, DE, that you try to be more respectful of others' hard work. If I had written anything that is currently in the article, I would be put off by your general tone. Even if everything in the article is wrong, it is the hard work of a volunteer, who is probably not an academic and not used to receiving such harsh criticism. --best, kevin [kzollman][talk] 00:00, 26 August 2006 (UTC)

The additional condition isn't actually original to Dawkins; for example, Axelrod and Hamilton in 'The evolution of co-operation' (Science 1981, 211, 1390-6) (on which the relevant chapter of The Selfish Gene is largely based) state "The game is defined by T>R>P>S and 2R>T+S" (or a similar form of words - I'm quoting from memory). It therefore seems wrong to credit the suggestion to Dawkins - does anybody know who actually suggested it first? Aretnap 20:29, 13 December 2006 (UTC)

I just want to add that, as DE said, the 2R > T + S condition is there to ensure that C,C and D,D are absolutely the best and worst outcomes. However, unlike what he said, it is a pretty common approach to assume this condition for a game to constitute a standard prisoner's dilemma, also among game theorists. It is to ensure, as said, that the players cannot escape the dilemma by alternating between D and C in a coordinated manner in turns, and it applies also if we allow mixed strategies in the one-shot game (for the reference, Rasmusen 1995, p. 34; if graduate textbooks on game theory are not enough, M. A. Nowak and K. Sigmund define the PD with such a condition in their numerous articles as well. This is obviously an appeal to authority, but rightfully so in this case). Leppone 23:12, 18 April 2007 (UTC)

New real world example added.

I've added another real world example, which is a clearer case of multiple subjects than the Commons example.

"Another instance of multiple players in a PD can be seen in a curved exam in schools. If everyone cooperated and intentionally did as poorly as possible, then everyone would receive the same grade, which would then be curved to 100%. However, if any single person defected, then everyone else's grade would suffer tremendously, while the defector guarantees his own 100% grade. Almost always, everyone defects, and only a small portion of the subjects get the high grade."

Why is game theory background required to read this?

Currently (Dec 9/06), the article starts with this italicized sentence: "Many points in this article may be difficult to understand without a background in the elementary concepts of game theory." Why? I think that level of complexity should be reserved for a game theory textbook. What's wrong with an article that can educate the layperson? HMAccount 02:29, 9 December 2006 (UTC)

Wikipedia tries to balance between two types of encyclopedias. One, like the Encyclopedia Britannica, is for general audiences. The other is a specialist encyclopedia. We have many articles which could never be realistically read by a wide audience, but are nonetheless useful to specialists (e.g. woodin cardinal). For articles like this one, we try to balance both interests. We hope that at least some of the article is readable by a wide audience, while also including some material for the specialist. I think the article does this well, but if you disagree I would be interested to know it. Is there any part in particular that you found difficult to read or understand, which you would like written more clearly? --best, kevin [kzollman][talk] 20:32, 9 December 2006 (UTC)
Hi, and thank you for responding. I came to this article after a player on the TV show Survivor mentioned the prisoner's dilemma, but that wasn't the first time I've heard the concept referred to. There probably is a general audience for it. I don't mind both audiences being addressed by the article.
I found the hardest part to read is the intro, which is full of technical terms. Some examples: game theory, non-zero-sum, cooperate, defect, dominated, equilibrium, unique equilibrium, Pareto-suboptimal solution, iterated, equilibrium outcome, Nash equilibrium.
In addition, the prisoner's dilemma is said to be a game, but I don't see how it is something people play. Is it best described as a thought experiment? Hope these thoughts are helpful. I do appreciate a lot of work has gone into this page so far. HMAccount 20:37, 10 December 2006 (UTC)
The intro of an article must summarise all the important points of the article, which must include the more technical points. In fact, a summary will necessarily be more readily understood by a specialist than by a layman, because there is no scope to explain things. That is done in the next few sections.
In the intro we must present the main features of the Prisoner's Dilemma, which cannot be done in a limited space without resorting to standard game-theoretic terminology. When writing the intro we cannot aim to be completely understood by a general audience without sacrificing information.
As for why it is called a game, game in this context is a highly technical term with a mathematical definition which Prisoner's Dilemma fits. That is why it is called a game. This is another part that can only be understood by someone with a background in basic game theory, hence the header. For a layman, I think the term game does a decent job of capturing the essence of the concept. Note the difference between sports and game. Loom91 09:28, 11 December 2006 (UTC)
Thanks also for your response. I hope my comments don't come across as being unappreciative of how much thought has gone into the article, and I am well aware that the introductory paragraphs have probably had more thought put into them than most! I suppose I really only want to point out that from my perspective, the header tag sets a tone of unfairness to the lay reader that perhaps readers more familiar with the subject matter won't have noticed.
In an effort to be helpful rather than just complaining, I'd like to suggest that the editors on this article add a nontechnical first paragraph to increase the accessibility of the article. Some other articles on technical terms that do this well include infinite monkey theorem , proprioception and signal noise, which last I think shows well how to include both a general and a specific definition.
I'll take a stab at this here; would the regular editors agree with an edit like this? (I think if the first paragraph was accessible, the italicized tag that a knowledge of game theory is required would be unnecessary. Once past the opener I definitely enjoyed the article.)
The prisoner's dilemma is a game first devised by mathematicians Merrill Flood and Melvin Dresher, and given its name by mathematician Albert W. Tucker. In the branch of applied mathematics called game theory, the prisoner's dilemma is one of the classic models used to explain how human beings cooperate or conflict with each other while making decisions. Researchers in economics, biology and sociology also find the prisoner's dilemma set-up useful in thinking about how people will behave.
Unlike other two-person games where there is always one winner and one loser (a zero-sum game, for example chess or tennis) the prisoner's dilemma game offers a third option, where the two players can increase their success by collaboration (a non-zero-sum game). The game hinges on conflict and interaction between individual and collective gain.
In the technical terms of game theory, in the prisoner's dilemma, as in all game theory, the only concern of each individual player ("prisoner") is maximizing his/her own payoff, without any concern for the other player's payoff. In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. In simpler terms, no matter what the other player does, one player will always gain a greater payoff by playing defect. Since in any situation playing defect is more beneficial than cooperating, all rational players will play defect.
(rest of intro as it stands) HMAccount 17:44, 8 January 2007 (UTC)
You are basically suggesting that the technical terms used are given a short explanation in the intro itself. The only problem with that is the increase in length. A reader looking for information should be able to tell by consulting the intro whether the article interests him and what the main features of its subject are. Too long an intro defeats this purpose. In your suggested intro, the first paragraph is good. But the second is devoted to explaining what is meant by zero-sum game and non-zero-sum game, a topic that is not directly related to the specific game of Prisoner's Dilemma. The goal is to achieve a balance between information and conciseness. Loom91 08:42, 10 January 2007 (UTC)


Prisoner's dilemma or Prisoners' Dilemma"

Hi all, I'm not a native speaker of the English language, but shouldn't it be prisoners' dilemma (or prisoners dilemma) instead of prisoner's dilemma? Sinas 09:33, 6 February 2007 (UTC)

While there are two prisoners involved, we are meant to put ourselves in the position of one of the prisoners acting independently, and hence it is the dilemma of only one prisoner, hence Prisoner's Dilemma. If we were thinking about how they should act together, then it would be the Prisoners' Dilemma. LukeNukem 10:56, 8 February 2007 (UTC)
It's only from the viewpoint of a single prisoner that the dilemma exists. For the two prisoners as a whole, there's no dilemma, both cooperate is the most profitable outcome no matter what. Loom91 06:23, 9 February 2007 (UTC)

What does the Closed Bag Exchange talk about?

I think this section needs a rewrite, because I can't see what the point of the last part is, either in itself or in relation to the prisoner's dilemma. It should either be more clearly explained or removed, I think. Bouke 10:16, 12 March 2007 (UTC)

Waited 7 days for comments; removing the section again. Bouke 08:32, 19 March 2007 (UTC)

I did not write that section, but I do not see your objection to it. What exactly do you consider the problem with it? Some of it was written in an inappropriate lecturing tone, but I added back only the part that had something to say. And since the section was in the article until this discussion started, please keep it in there until this discussion ends and a decision is reached. Loom91 08:08, 20 March 2007 (UTC)
IMHO, the last paragraph "In a variation, popular among hackers and programmers, ... collect and exchange information about the bag exchanges themselves?" should be deleted. No sources, poor tone, not clearly related to topic of article. Pete.Hurd 14:53, 20 March 2007 (UTC)
If you don't like the tone, feel free to reword. But I can't see how it is not related to the article. It talks about various factors to be considered in an actual implementation of iterated prisoner's dilemma in a virtual population. That seems related to me. Loom91 08:30, 21 March 2007 (UTC)
Aah, it's about an "actual implementation of iterated prisoner's dilemma in a virtual population". I didn't make that out of the original section, and I guess that is the problem with it. In that case, it absolutely needs references. If there are none, it is not important enough to include in this page, and/or it is original research. I don't know how such things are done, but if no one here can find such references, when is it warranted to remove the section? Or is it customary to let the section stand as it is until anyone can improve it? That may take a very long time if there are no references... Bouke 12:03, 22 March 2007 (UTC)
No hard and fast rule about when to delete. In this case, since there are at least a couple of comments questioning the inclusion, and some time has been given to support/reference it, we should consider deleting. At any rate, I support the proposal to delete. For the more general purpose of this article, it seems too specialized (and it perhaps should have a separate article targeted at technology applications). That said, the issue of iterated PD games is useful and interesting and should be covered, but perhaps with a more general example.--Gregalton 12:24, 22 March 2007 (UTC)

Generic payoffs

The T,C,N,P payoff matrix seems to be the win-win, lose much-win much, win much-lose much, lose-lose payoff matrix, which makes the following section which deals with the standard Temptation, Punishment etc payoffs make a whole lot less sense. Any objections if I convert it back? Pete.Hurd 20:34, 24 March 2007 (UTC)

OK, I'm out to lunch; it was never in T,C,N,P format, it was always like it is now diff, since 15:06, 22 July 2004. I'd prefer a T,C,N,P payoff table instead, to make the text more sensible. Opinions? Pete.Hurd 21:58, 24 March 2007 (UTC)
Umm, what are you talking about? Loom91 13:23, 25 March 2007 (UTC)
Whups, \end{incoherentbabbling}. OK, let me try again (with TRPS instead of TCNP; that'll make more sense). The section Prisoner's dilemma#Generalized form, from the start of the third paragraph to the third-last paragraph, speaks of the Temptation, Reward, Punishment and Sucker's payoffs. But neither payoff matrix in that section uses these payoffs. IMHO, one of them should be deleted and the other changed to:
Canonical PD payoff matrix
             Cooperate   Defect
Cooperate    R, R        S, T
Defect       T, S        P, P
What do you think? Pete.Hurd 15:01, 25 March 2007 (UTC)
Keep the numerical table, as it provides a concrete example. The win-lose table is pretty much pointless and can be replaced by the table you suggest. Loom91 09:56, 26 March 2007 (UTC)

Aumann and IPD

The current draft makes it seem that Axelrod was the first to consider the IPD. This is far from the case, and this redraft attempts to give attribution to the early and profound contributions of Aumann. DEDemeza 17:27, 5 April 2007 (UTC)

Error in cigarette advertising example?

The prisoner's dilemma is defined by the existence of a dominant strategy. If you knew the other player was going to confess, you should confess if you are selfish; and if you knew the other player was not going to confess, you should still confess if you are selfish. So you don't have to predict what the other player will do if you are selfish. The cigarette advertising example does not have this property, so it is not a prisoner's dilemma. It does have the property that the Nash equilibrium is not efficient, but that does not make it a prisoner's dilemma, just as a Cournot oligopoly is not a PD. DEDemeza 17:27, 5 April 2007 (UTC)

pareto optimum

From the definition in the article Pareto optimum, it appears that three of the four outcomes in the prisoner's dilemma are Pareto optimal. Is that correct? Mlm42 19:05, 12 April 2007 (UTC)

Yes. As the article puts it, PD is an excellent example of where "a Pareto optimum can be unstable and the Nash equilibrium can be sub-optimal." Smmurphy(Talk) 21:05, 12 April 2007 (UTC)
Three of the four are Pareto optimal? I believe only one result, both cooperate, is Pareto optimal. All other results are sub-optimal as they result in a lower total score. Loom91
Being Pareto optimal doesn't mean that you maximize social welfare. One way to say that an outcome is Pareto optimal is to say that no change can make any player better off without making another player worse off. Smmurphy(Talk) 15:09, 13 April 2007 (UTC)

Smmurphy is correct. All outcomes are Pareto efficient except both betray, the Nash equilibrium. In the three other outcomes at least one party is worse off by moving to another cell. However, it would be better if the article said "a Pareto optimum need not be a Nash equilibrium and a Nash equilibrium need not be a Pareto optimum." DEDemeza 16:11, 15 April 2007 (UTC)
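The claim that every outcome except mutual betrayal is Pareto optimal can be checked by brute force over the four pure-strategy outcomes, using the conventional payoffs T, R, P, S = 5, 3, 1, 0 (higher numbers are better here):

```python
# Brute-force Pareto check over the four pure-strategy outcomes,
# with payoffs T, R, P, S = 5, 3, 1, 0 (higher is better).
outcomes = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_optimal(o):
    """True if no other outcome makes a player better off without making
    the other player worse off."""
    p = outcomes[o]
    return not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                   for q in outcomes.values())

for o, p in outcomes.items():
    print(o, p, pareto_optimal(o))
# Only ("D", "D") comes out False: it is dominated by ("C", "C").
```

This only covers pure strategies; as noted below, allowing mixed strategies changes the set of outcomes under consideration.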

Pareto optimal outcomes really depend on whether you have cardinal payoffs with mixed strategies or ordinal payoffs. In the latter case the only Pareto optimal outcomes are the three you mentioned (not only C,C as I said first): neither of the players can be made better off by moving away from any of them without making the other worse off. In the former case, however, there may exist plenty of outcomes in which both players end up better off. Everyone ought to read the Stanford Encyclopedia of Philosophy's article on the PD: http://plato.stanford.edu/entries/prisoner-dilemma/

That's a good point, our section on the classic form doesn't mention that we are assuming pure (not mixed) strategies. Should we mention that (or will that confuse more than it will help)? Smmurphy(Talk) 02:46, 18 April 2007 (UTC)
The question of strategy only arises for iterated games. The first few sections only discuss the basic, singly-played game. Loom91 07:00, 18 April 2007 (UTC)

In single-shot games, a pure strategy and an action coincide. However, talk of a mixed strategy applies perfectly well to single-shot games (what you call singly played). Say, I may entertain a strategy that guides me to play C with 60% likelihood and D with 40% likelihood every single time I enter a PD that's repeated just once, even if I did it only once in my life. Now that's a single-shot game and a mixed strategy. Leppone 22:13, 18 April 2007 (UTC)
Note that the article subsequently imposes the condition 2 R > T + S, which rules out cooperation failing to be Pareto optimal because both players are better off under mixed strategies. The example does have this property. Probably confusing, though, to introduce at this point. DEDemeza 20:59, 22 April 2007 (UTC)

My logic may fail me, but "rules out cooperation not being Pareto optimal" can be said as: ensures that cooperation is Pareto optimal (or, in this particular case, the absolutely best outcome)? If so, then we agree: that's a definition of a 'pure' PD (not pure in terms of applied strategies, but rather in terms of payoff transitivity). I reckon it might be confusing to introduce mixed strategies in this paragraph; there's a lot more reworking to do elsewhere. Leppone 07:12, 25 April 2007 (UTC)

We agree. Sorry to be prolix. DEDemeza 13:23, 25 April 2007 (UTC)

Finitely repeated prisoner's dilemma...

This paragraph claims that in a finitely repeated prisoner's dilemma rational players will not defect in every round. Even were this correct, it would be unsatisfactory to offer the explanation that this is because the problem resembles the paradox of the surprise exam. Some hint should be given as to why the backward induction argument fails. In fact the analogy with the surprise exam is false (the prisoner's dilemma does not depend on a linguistic contradiction) and the textbook conclusion that players will defect is correct. I propose that the paragraph be rewritten to this effect. DEDemeza 22:46, 23 May 2007 (UTC)

Done. Loom91 10:28, 24 May 2007 (UTC)

Variations on PD

I've read about a variation of the PD where 2R < T + S, called the "graduate school game" in Evolutionary Computation for Modeling and Optimization, D. Ashlock (2006), ISBN 0-387-22196-4. The changed story is as follows: a newly married couple both have undergraduate degrees and are working. "Cooperate" is not going to grad school and continuing to work; "defect" is going to grad school. Since having a grad degree (usually) nets higher income but one has to pay for grad school, having both players backstab alternately is better than both going at once or neither going at all. The iterated grad school game is an allegory.

It was used as an example of a game in which there can be no evolutionarily stable strategy, using finite state automata. For proof: assume one exists. A finite state automaton's first output is predetermined. Then for the next output, the two machines' inputs are equal (since each is playing a copy of itself), and thus this output is also the same, and so on, so mutual backstabbing can never be achieved. Self-play therefore averages at most R, which is less than (T+S)/2 (assuming it's not mutual defection).
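The kernel of that proof, that a deterministic strategy playing a copy of itself can never desynchronise into alternating backstabs, is easy to demonstrate. A sketch (`tit_for_tat` below is just one example of a deterministic strategy):

```python
# A deterministic strategy playing a copy of itself always produces
# identical move sequences on both sides, so it can never realise the
# alternating backstabs that the graduate school game (2R < T + S) rewards.
def play_self(strategy, rounds=10):
    """strategy(opponent_history) -> 'C' or 'D'."""
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a = strategy(b_hist)  # player A sees B's past moves, and vice versa
        b = strategy(a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
a, b = play_self(tit_for_tat)
print(a == b)  # True: the two sides can never desynchronise
```

By induction the argument holds for any deterministic strategy, finite-state or otherwise: both sides see the same history at every round, so they emit the same move.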

As for the Axelrod analysis of TFT, Ashlock also shows that those four qualities need to go together: take a player that cooperates no matter what (ALL-C). It's nice, forgiving and non-envious, and it fails hard. That's something pretty significant. Obscurans 02:28, 28 May 2007 (UTC)

Fingerprinting

Also, Ashlock and others have come up with a technique called "fingerprinting" that can be used to identify a strategy. Basically, use the unit square. The opponent plays cooperate with chance x, defects with chance y, and plays TFT with chance 1-x-y. Work by Daniel Ashlock, Eun-Youn Kim and Warren Kurt vonRoeschlaub proved that if, for x+y>1 (the upper triangle of the unit square), the opponent instead plays the exact opposite of your last move (dubbed "psycho") with chance x+y-1, then the entire unit square is covered by a usually continuous function of (x,y) that gives your average score against that opponent. This page has some fingerprints for common strategies, including TFT, which gets an average score of (y^2+5xy+3x^2)/(x+y)^2. It is a useful tool in analyzing strategies such as FSAs, which might encode strategies in roundabout ways. Obscurans 02:28, 28 May 2007 (UTC)
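As a sanity check, the quoted TFT fingerprint can be evaluated at two corners of the unit square and compared with direct play, assuming the classic payoffs T, R, P, S = 5, 3, 1, 0. At (1, 0) the opponent is Always-Cooperate; at (0, 1), Always-Defect.

```python
# Evaluating the quoted TFT fingerprint (y^2 + 5xy + 3x^2)/(x + y)^2
# at two corners of the unit square, against direct play with payoffs
# T, R, P, S = 5, 3, 1, 0.
def fingerprint_tft(x, y):
    return (y**2 + 5*x*y + 3*x**2) / (x + y)**2

def tft_average(opponent_moves):
    """Average score of Tit-for-Tat against a fixed opponent move sequence."""
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    mine, total = "C", 0
    for theirs in opponent_moves:
        total += table[(mine, theirs)]
        mine = theirs  # TFT copies the opponent's last move
    return total / len(opponent_moves)

n = 10_000
print(fingerprint_tft(1, 0), tft_average(["C"] * n))            # 3.0 3.0
print(fingerprint_tft(0, 1), round(tft_average(["D"] * n), 2))  # 1.0 1.0
```

Against Always-Cooperate TFT scores R = 3 every round; against Always-Defect it is suckered once and then earns P = 1 forever, so the long-run average tends to 1, matching the formula at both corners.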

Finitely repeated prisoner's dilemma... 2

It is simply incorrect to say that for a known finite number of iterations the dominant strategy is to defect in all rounds. A dominant strategy is the best strategy no matter what your adversary plays. A simple counterexample: consider an adversary playing the tit-for-tat strategy in all rounds but the last one, and then defecting in the last one. Playing this very same strategy is a better response to itself than always defecting. Therefore, always defecting is not a dominant strategy. The backward induction proof is wrong when it asserts that, given that it is best to defect in the last round, it is also best to defect in the next-to-last round.

This says nothing about the fact that always defecting is the only Nash equilibrium that can exist. That is correct. Unfortunately, a Nash equilibrium is not always a dominant strategy.

I don't think anyone claimed that defecting in all rounds is the dominant strategy. Few games have a dominant strategy. Always defect is the unique subgame perfect Nash equilibrium. Defecting in the penultimate round is only best if you are facing a rational player, because such a player will certainly defect in the last round. DEDemeza 09:01, 7 June 2007 (UTC)

Paying a Fixed Fraction of Total Consumption is a Common PD

People on PD have been verbose & there are many entries. Will keep mine short. An apartment bldg in the late 80's -- we paid 1/64th of the bldg's electrical consumption (there were 64 apartments). Leads to a prisoner's dilemma (just think about the choice to run the A.C. all day while you're out). Similar case: a huge group from work goes to dinner & you know it will be "let's just split it by 10" [there are 10 of you]. Think about getting extra beers, desserts & so on versus being conservative. Who will really be paying for the extras? The important point is that any time you have to pay a fixed fraction of total consumption & you have a choice as to how much to consume, you set up a full-blown prisoner's dilemma. Such a common situation ought to be noted! —Preceding unsigned comment added by 199.196.144.13 (talkcontribs)

Thank you for your suggestion! When you feel an article needs improvement, please feel free to make those changes. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in (although there are many reasons why you might want to). The Wikipedia community encourages you to be bold in updating pages. Don't worry too much about making honest mistakes — they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome.

Note that you should try to find a good source for any information you add. Λυδαcιτγ 04:01, 21 June 2007 (UTC)

Actually the example here is a classic Tragedy of the commons, which is already referenced in this article under the "Real-life examples" section. It's already in there. Pete.Hurd 05:22, 21 June 2007 (UTC)

There is already an article on the "Diner's Dilemma" which should be linked. Note though that the Tragedy of the Commons is not in general a PD because each agent's grazing decision will not be independent of the choices of others i.e. there is no dominant strategy. DEDemeza 17:40, 21 June 2007 (UTC)

OK. I wrote some more about "the fixed fraction of total consumption" (including an IMPOSED one upon the participants -- no individual-unit metering of utilities) in the Diner's Dilemma discussion page. 199.196.144.16 21:10, 21 June 2007 (UTC)

Spoken Wikipedia Article

Have added spoken wikipedia version of this article - please comment here, or preferably my talk page. Hope it's OK! JebJoya 23:43, 25 June 2007 (UTC)

Circular logic

The clause 'all rational players will play defect' seems highly dubious, in that it begs the question of what rationality is. The link to the 'perfect rationality' article is useless - that piece is a stub, and only defers the definition of rationality to the main article on the subject, which says nothing suggesting that rationality need be equivalent to self-interest.

A couple of possible solutions: delete the first sentence of the first paragraph, or change the word 'rational' to something like 'self-interested' or 'egoist'.

But the following paragraph features the same problem, and it likely crops up a few times throughout the piece. —Preceding unsigned comment added by Jinksys (talkcontribs)

I think that since "rational" has a particular meaning in Game Theory, it might be nice to define it precisely in this article and continue to use it. I think that we should define it here so that our jargon is explained. Smmurphy(Talk) 02:15, 6 July 2007 (UTC)
As Smmurphy points out, rational is standard terminology that may or may not agree with philosophical definitions of rationality. The issue is explored in the morality section. Loom91 12:11, 6 July 2007 (UTC)

T > R > P > S

"Let T stand for Temptation to defect, R for Reward for mutual cooperation, P for Punishment for mutual defection and S for Sucker's payoff"

Where did this terminology come from? I've seen it used before, but never cited. Anyone know? Llamabr 17:53, 22 August 2007 (UTC)

Good question. I don't know for sure, but my guess is Rapoport. He expresses the game of chicken in terms of T,R,S, & P in Rapoport A. & Chammah A.M. (1966) The game of chicken. American Behavioral Scientist 10:10-15. I don't have a copy of their book (Rapoport A. & Chammah A.M. (1965) The Prisoner's Dilemma Ann Arbor: University of Michigan Press) but that's where I would look next. Pete.Hurd 18:29, 23 August 2007 (UTC)
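For reference, the defining inequalities on these four payoffs (T > R > P > S, plus the extra condition 2R > T + S usually required so that alternating exploitation doesn't beat mutual cooperation in the iterated game) are easy to check against the common example values (a sketch, assuming T=5, R=3, P=1, S=0):

```python
def is_prisoners_dilemma(T, R, P, S):
    """Check the defining payoff ordering T > R > P > S, plus the
    condition 2R > T + S that keeps mutual cooperation best on average
    in the iterated game."""
    return T > R > P > S and 2 * R > T + S

print(is_prisoners_dilemma(5, 3, 1, 0))  # True: the canonical example
print(is_prisoners_dilemma(5, 3, 1, 2))  # False: P > S is violated
```

Note that 2R > T + S is a condition for the iterated game only; the one-shot dilemma needs just the ordering.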

Arms race

Should we mention that the arms race analogy is flawed? In prisoner's dilemma, the worst alternative for party A is if party B is aggressive and they are not. In the arms race, the worst alternative is if both parties are aggressive (this is like chicken). In a nuclear arms race, if both parties continue developing weapons indefinitely, a nuclear "incident" (through theft, sabotage, insubordination, etc.) will inevitably occur, which could trigger a retaliation annihilating all civilization (even if no incident occurs, this scenario maximizes wasted resources). So that's a worse outcome than unilaterally stopping development of weapons, which lets your opponent rule over you, but gives them no reason to develop enough weapons to destroy civilization. Thoughts (or better, sources)? Superm401 - Talk 10:34, 23 September 2007 (UTC)

From the government's POV, getting conquered is the worst possible outcome. Loom91 19:59, 23 September 2007 (UTC)
I think chicken/hawk-dove, and PD are poor models for arms races due to the 2x2 nature of those games. Perhaps a better model (at least for an evolutionary arms race) might be the dollar auction, but that's pretty close to WP:OR... Cheers, Pete.Hurd 20:57, 23 September 2007 (UTC)
Loom91, I wouldn't say getting conquered is worse than the complete destruction of the human race. But it's probably OR, so unless someone finds a source about this, leave it alone. Superm401 - Talk 03:16, 24 September 2007 (UTC)

Too long an article for such a simple subject.

In my opinion, the article is too long and complicated for such a simple subject. I think also that the classical prisoner's dilemma itself, which I have been searching for, could have been mentioned in the introductory paragraph --- it is so short that it does not need to be preceded by such a lengthy introduction. By the way, was the sentence "However, neither prisoner knows for sure what choice the other prisoner will make." present in the original form? This condition is irrelevant to the prisoners, as in any case betraying will be the best strategy for each of them. It would be more relevant, however, to state that "each prisoner is sure that the other will not know his choice before making his own". There are many unnecessary repetitions and explanations of self-evident facts, which complicate the reading (I will not mention particular ones, to keep at least my comment short). I think also that the philosophy of choice making should be separated from the subject matter of the article. Cokaban 22:25, 13 November 2007 (UTC)

It's not a simple subject at all. For example, the difference between optimal strategies in the classical and iterated prisoner's dilemmas is unintuitive, if not puzzling, to many people. Samohyl Jan 20:31, 13 November 2007 (UTC)
I do not believe that an encyclopedia article should set as a goal to explain all aspects of all possible generalizations of the topic. Also, I do not believe that somebody who is puzzled by the differences between the classical and iterated prisoner's dilemmas will profit from such a lengthy and confusing exposition, where lots of terms are used with, seemingly, the sole goal of making references to other articles in Wikipedia. Unintuitive things cannot be made intuitive in an encyclopedia article: first, it is personal; second, there is no space for it. Ok, take an example. Here are two sentences I chose almost at random from the article:
  1. "In deciding what to do in strategic situations, it is normally important to predict what others will do."
  2. "So rational, self-interested play results in each prisoner being worse off than if they had stayed silent."
The first sentence is a generic fact, not loaded with much meaning in the particular context (no new information about the prisoner's dilemma anyway). The second states in complicated terms a fact that has already been mentioned, which is repeated several more times in various forms in the article, and which is self-evident once the rules of the game are clearly explained. I am interested in this dilemma, and in understanding the iterated version, and I like reading Wikipedia, but I have not yet succeeded in making myself read entirely through the article. I'll wait until I have more spare time. I believe that the article requires cleaning. Cokaban 22:25, 13 November 2007 (UTC)
Both of Cokaban's examples are valid cases of redundancy, but this isn't more of a problem in this article than in typical articles with heavy traffic. Since many editors think that repetition is a way to explain the unintuitive, it's a common problem; one that anyone can solve by a bold deletion or combination of repeated points. Ideally, every sentence should contain new information. For example, why is PD still called a 'dilemma' given that selfishness is an axiom, giving an obviously dominant strategy? What's the dilemma? --Wragge 12:42, 14 November 2007 (UTC)
Our articles are written to be accessible to the general public, not as dense technical expositions with unreadably high information-to-words ratio. Also, the dilemma is not mathematical, as there is a unique dominant strategy. The dilemma is intuitive. Loom91 19:08, 14 November 2007 (UTC)
I do not argue in favor of unreadability of articles; I just think that a high words-per-information ratio is equally unreadable, especially for the general public, and is unacceptable in an encyclopedia. There is nothing technical to explain, as the situation is quite clear. It is unacceptable that such an article would be harder to read than the introduction to a math paper on the subject. In any case, the pompous language in which it is written is not aimed at the general public. Repeating the same thing many times in writing is not a way to explain it. --Cokaban 10:15, 15 November 2007 (UTC)
I agree with this interpretation. I think humans evolved to (mostly) cooperate, because it's the optimal solution for the iterated PD, but applying this "experience" to the classical PD leads to the wrong conclusion. Human intuition simply expects that the PD (or any game, for that matter) will be repeated. Samohyl Jan 21:10, 14 November 2007 (UTC)
I'm still not sure what "Dilemma" the prisoner has, but a genuine dilemma for editors is whether to repeat points across different sections of the same article. Since I assume that the ideal reader will begin at the lead and read other sections until her interest runs out, I don't repeat points from the lead, but there's no style guideline against the opposite assumption (that the reader will focus on a single section, making repetition essential). For individual sentences, there are style guidelines, advocating simple, concise language. --Wragge 20:07, 14 November 2007 (UTC)
I think Tucker was using the term "dilemma" to refer to a non-trivial choice between two options. I'm not sure that he was making the claim that both options are equally bad. I think common usage loads the term dilemma with a strong connotation of "paradox" (the game does certainly have the flavor of a paradox, in the sense that players get a higher payoff by both choosing one option (cooperate), but by rationally attempting to maximize payoff they wind up both choosing the other option and getting a lower payoff), but I think "Binary Choice by Prisoner" translates pretty closely to "Prisoner's Dilemma". Pete.Hurd 23:18, 14 November 2007 (UTC)