WikiProject Statistics (Rated B-class, Low-importance)

This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion. This article has been rated as B-Class on the quality scale and as Low-importance on the importance scale.

WikiProject Mathematics (Rated B-class, Low-importance)

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as B-Class on the quality scale and as Low-importance on the importance scale. Field: Probability and statistics.

This article was nominated for deletion on January 8, 2006. The result of the discussion was keep. An archived record of this discussion can be found here.

## Talk archive of Three cards problem

### False article?

Think. What are the odds that the other side is also black? So, the card that was just picked up can't be white/white, which means you don't need the white/white card at all to perform this "problem". Then the probability that you just picked up white/black or black/black is of course 50/50! I agree with only one thing in this article: try it!!! I did try, and the probability was 50/50. 195.163.176.146 06:28, 14 December 2006 (UTC)

See if this helps. Let us assume that you have only the 2 cards, as you say. If you draw one and do not look at either side, then there is a 50/50 chance it can be either. But there is more to this puzzle. Now I let you look at one side of it. This is more information, and it changes your "bet". If you see white, it is certainly the white/black card. If you see black (as the problem states) you should still change your bet. This new information changes the odds. It is now 2/3, as this article explains: since you see a black side, and 2 out of the three black sides are on the B/B card, this card is more likely. From a common sense point of view, since you change your bet when you get to look and see white, you should also change your bet if you get to look and see black. More information changes the odds. Obina 13:12, 14 December 2006 (UTC)
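Obina's update argument can be checked by exact enumeration. The following is a minimal Python sketch (an illustration, not part of the original discussion), assuming each (card, face-up side) pair is equally likely:

```python
from fractions import Fraction

# Cards as (side0, side1); "B" = black, "W" = white.
cards = [("B", "B"), ("B", "W"), ("W", "W")]

# The 3 cards x 2 faces give 6 equally likely (card, face-up) outcomes.
outcomes = [(card, up) for card in cards for up in (0, 1)]

# Condition on the new information: the face showing is black.
black_up = [(card, up) for card, up in outcomes if card[up] == "B"]

# Among those, how often is the hidden face black as well?
both_black = [(card, up) for card, up in black_up if card[1 - up] == "B"]

print(Fraction(len(both_black), len(black_up)))  # 2/3
```

Seeing white instead would be handled the same way: filter on a white face up, and the W/W card dominates for the same reason.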

Read the problem description carefully! "The side facing up is black. What are the odds that the other side is also black?" There is only one card that is black on the other side too. Out of two cards (you don't have to count white/white, as I explained earlier), the probability of getting B/B is 1/2. What I meant by "False article?" is: is this article trying to describe a real paradox, or is it just an attempt to terrorize Wikipedia? We don't have to argue about this, just try it! 212.16.102.117 16:29, 14 December 2006 (UTC)

I told people we should have retained the incorrect argument which the anons give above and explain why it's incorrect. Please check the earliest revisions of the article for that argument. As for the anon, all the arguments given in the article are correct, even if we don't explain why the argument proposed above is wrong. — Arthur Rubin | (talk) 17:06, 14 December 2006 (UTC)

How do you explain that the real-world results are 50/50? PLEASE TRY IT! 212.16.102.117 18:33, 14 December 2006 (UTC)

Ok, I was wrong, I made an error, sorry to all. The result is 66.666...% = 2/3. And for future idiots, here is VB code to simulate the problem (just make a form with a command button):

```vb
Private Type Card
    Upface As Boolean
    Downface As Boolean
End Type

Private Sub Command1_Click()
    Dim ThreeCards(1 To 3) As Card
    Dim Pullout As Long
    Dim n As Long
    Dim ii As Long
    Dim BothBlack As Long

    Randomize Timer

    'First we define the cards...
    'Let's say that True means black.

    'Card one: both sides are black.
    ThreeCards(1).Upface = True
    ThreeCards(1).Downface = True

    'Card two: both sides are white.
    ThreeCards(2).Upface = False
    ThreeCards(2).Downface = False

    'Card three has two orientations, so it is defined inside the loop.

    For ii = 0 To 1000000
        Pullout = Int(Rnd * 3) + 1

        If Pullout = 3 Then
            'Card three: the up face could be either black or white.
            If CInt(Rnd) = 1 Then
                ThreeCards(3).Upface = True
                ThreeCards(3).Downface = False
            Else
                ThreeCards(3).Upface = False
                ThreeCards(3).Downface = True
            End If
        End If

        'Count draws showing black, and among them, draws black on both sides.
        If ThreeCards(Pullout).Upface = True Then
            n = n + 1
            If ThreeCards(Pullout).Downface = True Then
                BothBlack = BothBlack + 1
            End If
        End If
    Next ii

    Print "Probability is " & Str((BothBlack / n) * 100) & "%"
End Sub
```



Feel free to use this code as you like. 212.16.102.117 19:49, 14 December 2006 (UTC)

The key sentence in this analysis I think is

'Thus, there are only two possible cards, double green-faced or purple/green-faced, and each has an equally likely probability of being the one you chose.'

I think this is not true. The history of where the cards came from is key. How do we know there is an equal chance of those 2 cards?

When assessing the cards on the table, you must assess where they came from: the cards in the hat. Just like guessing whether a person on a team is a man or a woman, it is important to know whether this is a rugby team or not.

There are 2 single-colour cards in the hat, and only one dual-colour card. The chance of a single-colour card is 2/3, regardless of what colour you see. Obina 12:26, 11 January 2006 (UTC)

Agreed. - Haukur 12:28, 11 January 2006 (UTC)

Here is another way to consider the problem, using the colour and side rather than the card. We see a green card, so we know the double purple is gone. So there are only 2 cards in the running. How many permutations and combinations are there, so we can assess the probability? Of these 2 cards, if we look at one side, we could be looking at the top of card A, the bottom of card A, the top of B, or the bottom of B. Of these, 3 are green and one is purple. But we look and see that it is not purple. So we could be looking at one of 3 sides: the top of A, the bottom of A, or the top of B. So there is a 2/3 chance we are looking at card A, and thus a 2/3 chance that the bottom is green.

Anyone who still has doubts - this is a very easy experiment to try. Try it 20 times with, say, 3 business cards with P or G written on each side. Record how many times the colour under matches the top. Obina 18:20, 11 January 2006 (UTC)
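The business-card experiment Obina suggests can also be run in software. Here is a minimal Monte Carlo sketch in Python (an illustration; the seed and trial count are arbitrary choices), assuming a uniformly random card and a uniformly random face landing up:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
cards = [("B", "B"), ("B", "W"), ("W", "W")]

seen_black = both_black = 0
for _ in range(100_000):
    card = random.choice(cards)
    up = random.randrange(2)        # which face lands up
    if card[up] == "B":
        seen_black += 1
        if card[1 - up] == "B":
            both_black += 1

# The ratio settles near 2/3, not 1/2.
print(both_black / seen_black)
```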

The article cites a poll of 53 students, of which 35 guess 1/2 and only 3 guessed 2/3. That leaves 15 unaccounted for. I find it hard to believe that so many guessed neither the right answer nor the naive wrong answer. Does anybody have more detail on that? 171.64.71.123 04:54, 30 August 2006 (UTC)

As a former professor, I can answer this; it's easy. 35 students thought the question was a no-brainer. 3 students had heard this problem before, and 15 students didn't want to look like idiots, so they didn't raise their hands. Rklawton 15:23, 27 February 2007 (UTC)

### From Heraclesprogeny to Arthur Rubin

I think I fixed the article, leaving the naive 1/2 in, but showing the correct number is 2/3. Arthur Rubin | (talk) 00:04, 12 January 2006 (UTC)

• Improvement well done! Fine to leave it in and explain how 1/2 can seem like the quick answer, but even a qualitative approach leads to 2/3 if probability is considered correctly. As said on the talk page, we must find a reference or this page will not survive as original research. Obina 00:13, 12 January 2006 (UTC)
I found that Martin Gardner has written about it under the name of "The three cards" [1]. This should give more leads. --C S (Talk) 08:49, 12 January 2006 (UTC)
Good find! Indeed it does give leads; see question and answer by John Schuyler. The pdf you found is a copy, probably illegal, of "Science Puzzlers", first published by Scholastic in 1960, and republished by Dover in 1981 as "Entertaining Science Experiments with Everyday Objects". Melchoir 09:08, 12 January 2006 (UTC)

### Approaches

I've taken out the sections on the "qualitative approach" because they basically make no sense. Melchoir 07:50, 12 January 2006 (UTC)

### Article name?

What is the puzzle called in the sources you're using? It'd be nice to have a name for the article which gets at least *some* Google hits :) - Haukur 12:24, 12 January 2006 (UTC)

Duh, actually reading the above shows that a move to a title like The Three Cards might be desirable since that seems to be Gardner's title. I'm not sure about the definite article or the capitalization, though, and maybe the name is ambiguous.
Stupid Gardnerian puzzlecruft ;) - Haukur 16:23, 12 January 2006 (UTC)
I like The Three Cards. Sure, it's ambiguous, but I can't see anybody typing in "The Three Cards" and being disappointed by the result. Of course, that's because I can't see anybody typing it in at all... Melchoir 16:39, 12 January 2006 (UTC)
How about The three cards (probability)? No caps, per Wiki norm - it is not a proper name! And put a link to it on the Probability page. Obina 20:06, 12 January 2006 (UTC)
I contacted my statistics professor from CalTech. He couldn't find a name or a reference, but he used it in Ma 2 (sophomore math). (And he uses red/white cards, with an odd reference to three coins, with colors gold and silver.) Not helpful, I'm afraid, but I thought I'd report progress (or lack thereof). Arthur Rubin | (talk) 01:35, 13 January 2006 (UTC)
One more possible name: The Three Cards problem, to make it like [[Monty Hall problem]]. I still prefer The three cards (probability). I'll change to one of these in a day or 2 and add a few links to here as mentioned. Obina 14:13, 14 January 2006 (UTC)
I'd still prefer Gardner's title but either of those will do too. We could also use something like Three cards problem or The three cards puzzle. - Haukur 14:29, 14 January 2006 (UTC)
The "odd reference" sounds like he is referencing Bertrand's Box Paradox, which should really be merged here. Bertrand's problem, though, involved six coins contained in three boxes, rather than three coins. -- Antaeus Feldspar 18:50, 2 February 2007 (UTC)
It has also been called the Three Card Swindle. This name is applied when the swindler tries to get the mark to bet on the outcome on the theory that the relevant probability is 1/2 rather than 1/3. But this name doesn't find many pages on Google. Jim 14159 03:44, 5 November 2007 (UTC)

### Colors

This is a trivial issue, but it's easiest to fix trivial issues. The article currently uses

• green/purple.

By listing green first I mean that green is the color shown after the draw. The external links are divided between

• black/red,
• red/white,
• black/white,
• red/white.

Schuyler and Gardner, listed on this talk page, use

• black/white,
• black/white.

I don't know if the apparent consensus on the ordering (black > red > white) tells us anything about psychology, but I'm going to do a find-and-replace to follow Gardner. Yes, I know I have too much time on my hands! Melchoir 16:25, 12 January 2006 (UTC)

### Expert tag

I think this article still needs a tag at the top indicating that it's not "done", whatever that means, so I'm adding Template:Expert. I think it's mainly "Formal approach" that needs logical cleanup, and we also need to add a correct, informal explanation. Melchoir 18:18, 14 January 2006 (UTC)

Why exactly do you feel an expert is needed? This is a very, very simple probability question that preys on people ignoring the first part of the information so that they pick the 'obvious' answer of 1/2. I don't see how an expert can provide a more thorough answer than the long-winded one already shown. There are three cards: BB, WW, BW. If one card is showing B, then it's either the BB card one way up, the BB card the other way up, or the BW card with B up. Thus 2/3 of those possible situations have B on the under side. Very, very, very simple. --Dacium (talk) 08:41, 13 February 2006 (UTC)
As I said before, the "Formal approach" section still needs work; historically, it's an outgrowth of an incorrect solution, and it should probably be scrapped entirely. As for the simple explanation, by all means, add it! The expert tag is not meant to discourage non-experts. Heck, I'm not an expert in probability myself, and it doesn't stop me! Melchoir 09:07, 13 February 2006 (UTC)
Eh, I did it myself. Melchoir 09:28, 13 February 2006 (UTC)

### This problem and Lewis Carroll's two coins problem

There is something puzzling for me in the way this problem is presented; it differs from how I've seen the same problem presented before, in a way that does not change the answer but does actually duplicate a second, related problem from Lewis Carroll.

Carroll's problem was as follows: You have a bag, and in this bag are one regular coin, with a head and a tail, and one double-headed coin. You shake the bag, reach in and pull out a coin and look at only one side. The side you look at is heads; what is the chance that the other side is also heads?

Obviously, this is the same puzzle, in all functional respects, as the three cards problem as currently described in the article: simply substitute "cards" for "coins", "black" for "heads", and "white" for "tails". Which highlights the puzzling part about the current presentation of the problem: even though it's called the "three cards problem", one of those three cards is absolutely irrelevant. Because we specify that the card face we're looking at is black, the card that's white on both sides can never be the one we're looking at.

I suggest that we might describe the problem in what I believe is closer to its original form: instead of specifying that we see a card face that's black, and asking the chance that the other side is black, we simply ask what the chances are that the other side is the same color as the side we're looking at, whichever that is. The answer remains the same: the chances are 2/3 that we're looking at one of the faces of a card with the same color on both sides. (This form of the puzzle also has a literary reference that can be mentioned -- it was used in a Leslie Charteris story about Simon Templar.) -- Antaeus Feldspar 15:47, 1 March 2006 (UTC)

I agree with you on mathematical grounds, but I disagree with your proposal because in all the references I've seen, the problem is given as it currently appears here. I think it would be better to expand "Symmetry" into a top-level section and mention Simon Templar there. It would contain the alternate statement of the problem, as well as the story of the scammer, if you know what I'm talking about. Melchoir 21:11, 1 March 2006 (UTC)
I'm afraid I don't know the story; fill me in? As far as how the problem appears in the references, Schuyler credits Gardner as his source, but Schuyler specifies that the card face looked at is black, which Gardner does not. I can't check the Nickerson or Bar-Hiller and Falk references right now, but I do find it rather hard to believe that the majority of commentators discussing "the three cards problem" are actually using a form of it where the existence of the third card is absolutely irrelevant. -- Antaeus Feldspar 00:19, 2 March 2006 (UTC)
The 2 coin problem is similar - perhaps identical mathematically by some solutions. I think, though, the third card adds to the common-sense wrong answer. And considering pulling one card out of a bag of 3 helps illustrate the value of considering the sample from which an item is drawn. This is very important in population probabilities, used by drug developers, marketing execs, and politicians. A card could be either one colour or two. But if one pulls it from a bag with 2/3 of the cards having one colour, there is a 2/3 chance the card is monochromatic. As expressed here, the problem helps one move to the same problem, say, where there are 5 cards in a bag. If we are told one is all white, one is all black, and the other 3 are W/B, we can solve this directly. Obina 11:50, 4 March 2006 (UTC)
Sorry, I forgot to respond earlier. The scam story I was talking about goes something like this: I draw one of the three cards from a hat. No matter what it shows, I invite you to bet money that the other side is the opposite color. Of course, regardless of the color, if you bet you have a 2/3 chance of losing. Mathematically speaking, the underlying problem is exactly what you described in your original comment: what matters is the probability of getting a match, not a particular color. However, the scam (hypothetically) works because I have shown you a color, and I ask an intentionally misleading question about a color. I am hoping that you don't think about generalities, but you are confused by the situation at hand and assume the probability is 1/2. Melchoir 19:28, 4 March 2006 (UTC)
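The scammer's edge in Melchoir's story can be made concrete. Below is a short Python sketch (an illustration; the even-money $1 stake is an assumed framing, not from the thread) of the mark's expected value when betting that the hidden side is the opposite color:

```python
from fractions import Fraction

cards = [("B", "B"), ("B", "W"), ("W", "W")]
outcomes = [(card, up) for card in cards for up in (0, 1)]  # 6 equally likely

# The mark bets $1, at even money, that the hidden side is the OPPOSITE color.
# Payoff is +1 when the two sides differ, -1 when they match.
ev = Fraction(0)
for card, up in outcomes:
    payoff = 1 if card[up] != card[1 - up] else -1
    ev += Fraction(payoff, len(outcomes))

print(ev)  # -1/3: the mark loses a third of a dollar per bet on average
```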

### And the rest?

In a survey of 53 Psychology freshmen taking an introductory probability course, 35 incorrectly responded 1/2; only 3 students correctly responded 2/3. What did the other 15 psychology students suggest? Anyone knows? Just curious. INic 02:40, 10 October 2006 (UTC)

### Minor clarification suggestion

Hi - It might be just me, but I feel that in the "Solutions/Intuition" section, where it says:

"Two of the 3 black faces belong to the same card. The chance of choosing one of those 2 faces is 2/3."

it would be clearer if it said something like:

"Two of the 3 black faces belong to the same card. Given that you have chosen a black face, the chance of choosing one of those 2 faces is 2/3."

Without that, I at least was still thinking in terms of the overall cards...

Comments? Gwynevans 12:43, 2 February 2007 (UTC)

### I think there is a logical error involved here somewhere (Or not)

Okay, if you're dealing with three cards, the probability of drawing a card with 2 of the same faces is 2/3. That much we know. But drawing from the three cards, you actually, without knowing any information about the card you've drawn, have a 1/3 or 33.33...% chance of drawing any single card. Now, if you look at one side of the card, you are left with 2 possible conclusions: 1) The other side of your card has the same color as the side you see 2) The other side of your card has the other color

There are no thirds to consider, there's a 50% chance that the color will come up either way.

Looking at it as a matter of faces is an error, because that would mean you could draw each face randomly, and then you would divide the problem into 6ths, with a 2/5 chance of drawing another black face, and a 3/5 chance of drawing a white face. However, if we remove 2 white faces, i.e. eliminating the white/white card, we're left with a 2/3 chance to draw one of the two black faces, and a 1/3 chance to draw the white face, and that's where the error lies. Because we do not draw the other side of the card randomly, in essence, the way the problem is worded would give a 50% chance of either, because the white/white card is extraneous. The card can be one of 2 options. Try doing the experiment without the white/white card, since that's what the wording in this problem suggests, and the trend will tend towards 50/50 unless you have a fluke.

Now, I do agree that the probability of drawing a card with the same color on either side is 2/3, or 66.666%, but if the problem was presented as shown in this article, then the kids who said 50% were correct. —The preceding unsigned comment was added by 68.227.203.149 (talk) 01:14, 14 February 2007 (UTC).

Added: I'm signing this because I just made an account.--Maveric Gamer 01:18, 14 February 2007 (UTC)
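For what it's worth, the two-card experiment proposed above (dropping the white/white card) can be simulated directly, and conditioning on a black face showing still gives 2/3, not 50/50, because the B/B card shows black twice as often as the B/W card. A minimal Python sketch (an illustration, assuming a random card and a random face landing up):

```python
import random

random.seed(1)
cards = [("B", "B"), ("B", "W")]   # white/white card removed

seen_black = both_black = 0
for _ in range(100_000):
    card = random.choice(cards)
    up = random.randrange(2)       # which face lands up
    if card[up] == "B":
        seen_black += 1
        if card[1 - up] == "B":
            both_black += 1

# Tends to 2/3: of the 3 black faces, 2 belong to the B/B card.
print(both_black / seen_black)
```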

I'm afraid you're wrong. Just look at the symmetry argument. If the probability that you draw a card with the same color on both sides is 2/3, then that probability cannot magically become 1/2 if you specify the color you observe on the top. — Arthur Rubin | (talk) 01:45, 14 February 2007 (UTC)
Yes, but if there is a card in front of you that is black, there are only 2 cards it could be; thus there's a 50% chance that it is the black/black card, and a 50% chance it's the black/white card. Before the draw, there was a 1/3 chance that you would draw the black/black card, a 1/3 chance of drawing the black/white card, and a 1/3 chance of drawing the white/white card. After drawing a card that you don't see, that chance is still 33% either way, as there are 3 possibilities. After looking at the table with the card on it, and seeing the black card, one card gets eliminated as a possibility, so it can only be one of two cards. As a point of fact, any information we can gain about this card will only serve to either make the odds 100% (if we were told that the sides are different) or 50% (both sides are the same). If we only know one side, then we eliminate one of three options, and since two options are left to us, it's still 50/50.
Even if we look at the sides. How many sides are white? 3. How many sides are black? 3. So what are the odds of drawing a card with the white side up? 50%. With the black side up? 50%. With the white side down? 50%. Black side down? Yep, still 50%.
Maveric Gamer 02:56, 14 February 2007 (UTC)

Okay, I think I get it now. Basically, what's happening is that when there is a black side up, there are 3 possible scenarios: 1) side 1 of the B/B card is up, 2) side 2 of the B/B card is up, 3) side 3 (the black side of B/W) is up.

In cases 1 and 2, the other side will also be black. I would delete everything I've said, but the discussion may help someone else comprehend this better. 68.227.203.149 07:06, 14 February 2007 (UTC) (Sorry about the IP, forgot to re-log in. Me again Maveric Gamer 15:29, 14 February 2007 (UTC))

I'm glad 68.227.203.149 got it. Here's the safe approach to this type of problem: List all outcomes of the fundamental experiment in a way where it's clear that there is symmetry, i.e. that they all have the same probability. E.g., flipping two identical coins simultaneously, you can only distinguish three outcomes (no heads, one head, or two heads), but to have symmetry you must list four outcomes (HH, HT, TH, and TT, where H=head and T=tail, and pretending you can distinguish HT from TH).
In our case, there are three cards of equal probability, and each card has two sides of equal probability, so even though the two sides of the B/B card are indistinguishable (and likewise the two sides of the W/W card), we list $3\times2=6$ outcomes of equal probability:
1. One side of B/B
2. Other side of B/B
3. One side of W/W
4. Other side of W/W
5. B side of B/W
6. W side of B/W
These are all equally probable outcomes of the fundamental experiment, and as there are no other possibilities, each has an initial probability of 1/6.
Now, we are told we are watching a black side, which means we can discard possibilities 3, 4 and 6 in my list, leaving us with just the three possibilities listed by 68.227.203.149 above. There's no information making any of these possibilities more or less probable than the other two, so we still have symmetry, with a probability of 1/3 for each.
With this approach, try your hand at e.g. the Boy or Girl paradox (easy), or maybe even the Monty Hall problem (harder)!--Niels Ø (noe) 11:33, 14 February 2007 (UTC)

Ok, now I need some further help from you smart people. It seems that the above argument assumes that we are drawing sides, not cards. If we are drawing cards, there are three possibilities:

1. B/B
2. B/W
3. W/W

We draw a card, and find a black side showing. This eliminates W/W as a possibility. Only now the question is asked, "What is the chance that the other face is Black?". With only B/B and B/W as possibilities, is the probability not 1/2? It seems false to imply the "probability of seeing a black face" has anything to do with the problem, as the black face has already been seen, akin to saying that a coin flip has a 50% chance of being tails after the coin has already been flipped and tails is showing. Any help here will be appreciated.

Adl116 20:11, 14 September 2007 (UTC)

The problem is that once you have drawn a card and can see the black side staring at you... by then, right away, you know that you are much more likely to have drawn the black card than the mixed card. For that reason, it is not equally likely for the other side of the card to be white as black, because there is no "equal opportunity", for lack of better words. Rock8591 (talk) 07:25, 9 July 2009 (UTC)
The problem with your reasoning, Adl116, is that we have more information than "one of the sides is black"; when we take the card out of the bag and place it on the table, we're labelling the sides of that card. We are indeed drawing sides, and we know that "this side is black".
If, on the other hand, someone else, whom we couldn't see, pulled out the card, looked at both sides, and said "(at least) one of the sides is black", your reasoning would be correct, and the chances would be 50-50.
This is also an interesting point: if someone else pulls the card and says "one of the sides is black", we need to know whether said person has looked at both sides (or at least know the probability that he did) before we can accurately estimate the probability that both sides are black.
Bogfjellmo 21:02, 11 October 2007 (UTC)
Bogfjellmo Wrote:
If, on the other hand, someone else, whom we couldn't see, pulled out the card, looked at both sides, and said "(at least) one of the sides is black", your reasoning would be correct, and the chances would be 50-50.
Why would this change anything? Just because we didn't pick the card?
Niels Ø (noe) Wrote:
In our case, there are three cards of equal probability, and each card has two sides of equal probability, so even though the two sides of the B/B card are indistinguishable (and likewise the two sides of the W/W card), we list $3\times2=6$ outcomes of equal probability:
#One side of B/B
#Other side of B/B
#One side of W/W
#Other side of W/W
#B side of B/W
#W side of B/W
These are all equally probable outcomes of the fundamental experiment, and as there are no other possibilities, each has an initial probability of 1/6.
Now, we are told we are watching a black side, which means we can discard possibilities 3, 4 and 6 in my list, leaving us with just the three possibilities listed by 68.227.203.149 above. There's no information making any of these possibilities more or less probable than the other two, so we still have symmetry, with a probability of 1/3 for each.
Assuming the person drew a card at random and did not rig the draw, does the above formula change simply because we weren't the ones to draw the card? No. The person who draws the card should be irrelevant. The formula should hold true. —Preceding unsigned comment added by 142.177.153.177 (talk) 00:37, 25 February 2009 (UTC)
True, if the assistant looks at ONLY ONE side of the card he chooses and then places a black side up if possible, then yes, it changes nothing and the formula still holds true precisely as stated in the original problem.
However, if the assistant looks at BOTH sides and places a black side up if possible, it in fact DOES matter.
In the latter case there are only three possible scenarios, each with 1/3 probability:
1) Assistant looks at both sides of B/B card and places one of the BLACK sides face up
2) Assistant looks at both sides of B/W card and places the BLACK side face up
3) Assistant looks at both sides of W/W card and rejects the card (Repeats process until seeing at least one BLACK side)
We throw out number 3, and we are left with 1 and 2, still with equal probability:
1) B/B card with a BLACK side up
2) B/W card with the BLACK side up
So now it IS 50/50 that the other side is WHITE, but of course these are not the conditions stated in the article. Racerx11 (talk) 18:13, 28 August 2010 (UTC)
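Racerx11's assistant variant can be enumerated the same way as the original problem. A short Python sketch (an illustration of the protocol described above, with the card encoding assumed):

```python
from fractions import Fraction

cards = [("B", "B"), ("B", "W"), ("W", "W")]

# Assistant protocol: look at BOTH sides, place a black side up if possible,
# and reject (redraw) the all-white card. After rejection, the two surviving
# cards are equally likely, and the visible side is black either way.
accepted = [card for card in cards if "B" in card]

p_other_black = Fraction(sum(1 for c in accepted if c == ("B", "B")),
                         len(accepted))
print(p_other_black)  # 1/2 under this protocol, unlike the original problem
```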

### Why three cards?

What does the white/white card have to do with anything? It's not clear to me why it's included in the problem at all. Historic reasons?

Actually, there's a nice explanation of the correct answer using the third card! After somebody answers you 50/50, just ask them what's the probability, if you take a random card and place it on the table, that the hidden color is the same as the shown color... With three cards, the answer is obviously 2/3. Now you can explain that the fact the shown color is black doesn't change the probability. Ratfox (talk) 22:04, 28 November 2007 (UTC)
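Ratfox's unconditional question is easy to verify: over all six equally likely (card, face-up) outcomes, count how often the hidden color matches the shown color. A minimal Python sketch (illustrative encoding):

```python
from fractions import Fraction

cards = [("B", "B"), ("B", "W"), ("W", "W")]
outcomes = [(card, up) for card in cards for up in (0, 1)]

# Hidden color equals shown color exactly on the two single-color cards.
matches = sum(1 for card, up in outcomes if card[up] == card[1 - up])
print(Fraction(matches, len(outcomes)))  # 2/3, whatever color is shown
```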

I don't know anything about this problem other than what I've read on Wikipedia and the discussion pages. But obviously the Bertrand's Box Paradox is the exact same problem, so I think that these two articles should be consolidated. In addition, some Wikipedia users discussed deleting the Three cards article in January 2006 because they thought it was original research. Therefore, including Lewis Carroll (as discussed above) and Joseph Bertrand or anyone else in this article under something like a "History of the problem" section could be worthwhile for the article's credibility and interest. —Rafi Neal 04:22, 30 March 2007 (UTC)

Agreed, I'll merge the articles. Melchoir (talk) 00:47, 30 March 2008 (UTC)

### Ambiguity in the Problem

The question boils down to: Are we picking a black side that goes with a card, or are we picking a card that happens to be black on one side?

If we are picking a black side, then there are 3 possible black sides, 2 of which belong to the all-black card, rendering the probability of choosing the all-black card 2/3.

If we are picking a card that happens to be black on one side, then there are 2 possible cards that have at least one black side. Therefore, choosing the all-black card has 1/2 probability.

This problem is based on the ambiguity of whether we know that we are picking a black side or not, not on its counterintuitive nature. —Preceding unsigned comment added by Aznthird (talkcontribs) 02:54, 14 November 2007 (UTC)

It is tempting to think that the confusion is due to an ambiguous statement of the problem. But this is not the case. The current article text is
• "You put all of the cards in a hat, pull one out at random, and place it on a table. The side facing up is black."
Since we are observing only the side facing up, which turns out to be black, we have picked a black side as you put it. Picking a black card requires a much more elaborate setup, such as:
• "You put all of the cards in a hat, pull one out at random, and blindly pass it to an assistant, with the instruction that he should place it on the table with a black side facing up, if possible. The assistant takes the card, looks at both sides, and places it on the table. The side facing up is black."
This is clearly not the problem under consideration. Melchoir (talk) 00:45, 30 March 2008 (UTC)

### Hmm

Ok, first off: "What are the odds that the other side is also black?" I recommend learning the difference between odds and probability. Refer to Wikipedia. The odds are 2.

Now, if you're asking what is the probability that the other side is black, that is 2/3. To those down below who think it is 1/2: the "trick" to this is labeling the faces 1 through 6, with 1 and 2 being the black/black card, and 3 being the black face of the black/white card. You have been told one of {1,2,3} is showing, with equal probability. What is the probability that the face underneath is also one of {1,2,3}? That probability is 2/3.

Probability: 1/3*1+1/3*1 +1/3*0 = 2/3

The odds are (2/3) / (1/3) = 2.

I would edit the thread, but it's just too darn much fun as is. Jeremiahrounds (talk) 21:59, 29 March 2008 (UTC)

The odds aren't "2" but "2 to 1". Since such odds aren't stated as simple numbers, there is no danger of confusion between the two formats, which are logically equivalent. And the difference between them is not as interesting as the paradox under discussion.
I like the labeling trick; fortunately it's already in the article. Melchoir (talk) 01:24, 30 March 2008 (UTC)
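The odds/probability conversion under discussion is just p/(1-p). A one-line Python sketch of the arithmetic:

```python
from fractions import Fraction

p = Fraction(2, 3)        # probability the hidden side is also black
odds = p / (1 - p)        # odds in favor: 2, conventionally stated "2 to 1"
print(p, odds)
```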

## Talk archive of Bertrand's box paradox

This article should probably be consolidated with Three cards problem, since they are exactly the same problem, only with cards substituted for boxes, sides and their colors substituted for drawers and their coins. -- Antaeus Feldspar 05:22, 10 January 2007 (UTC)

I agree. Also, there was some discussion on the Three cards problem's talk page about a similar Lewis Carroll problem. Also, apparently the three cards article was nominated for deletion in January 2006 on the grounds of it being original research—so adding Bertrand and Carroll to the article, perhaps in a "History of the problem" section, would give the article more credibility. I'll add this comment to the Three cards problem's talk page. —Rafi Neal 03:57, 30 March 2007 (UTC)
I came to the discussion page to make exactly this same suggestion. I tagged the article for merge. 00:43, 24 April 2007 (UTC)

## Merged Three cards problem into Bertrand's box paradox

I've just merged Three cards problem into Bertrand's box paradox. I think this is consistent with the comments in the above pre-merge talk page sections. Melchoir (talk) 01:17, 30 March 2008 (UTC)

## Clarity

I would like to change the sentence "However, this reasoning fails to exploit all of your information; you know not only that the card on the table has a black face, but also that one of its black faces is facing you." to "However, this reasoning fails to exploit all of your information; you know not only that the card on the table has at least one black face, but also that it has a black face which is facing you." I realize this may seem redundant, as the only way to know the first part of the statement is by knowing the second part. But anyway, I prefer this to the current sentence, which strongly suggests the card has two black faces. Comments? RomaC (talk) 23:02, 19 June 2009 (UTC)

I agree that the wording strongly suggested the card has two black faces. I was about to make your clarifying changes, but then I realized the whole sentence was dubious. So I have gotten rid of the "face which is facing you" bit altogether. More complete explanations are in the section below it. Open4D (talk) 20:41, 16 September 2009 (UTC)

## Minor Concern How the Problem Is Stated

I was introduced to this problem with the three coins in a bag version. However, the difference is that here in this article the problem is stated as, paraphrased, "If heads is drawn, what is the probability the opposite side is heads" compared to, "No matter what side is drawn, what is the probability that the opposite side is heads." Answer to the latter problem is 1/3. The idea being that the coin "game" is a carnival game where the odds/probability are stacked in favor of the house at 2/3. Apologies if I goofed something. Ij3n (talk) 00:31, 28 August 2009 (UTC)

## Removing the "100 silver coins" bit

I am about to remove this addition. Why? Let X represent 100 silver coins in a drawer. If "each silver coin is replaced by 100 silver coins", then the problem simply changes from GG/GS/SS to GG/GX/XX and is otherwise unchanged. I imagine this is not what the author intended; if so can I suggest a more detailed explanation? This would be better placed in new section titled something like "alternative explanations if you still don't get it", which could also have many of the discussions in these talk pages written up and added to it. (Although, if you believe in WP:NOTTEXTBOOK, then I suppose that section I have just described probably belongs in Wikibooks.) Open4D (talk) 10:10, 16 September 2009 (UTC)

Regarding the original formulation with boxes and drawers (or simply coins), I would draw everybody's attention to the fact that the standard arguments about choosing drawers (coins) rather than boxes are quite wrong, because the coins are not physically mixed together, and in fact we choose a box first and a coin second (however, in the case of cards this way of arguing is OK, since the cards are indeed mixed together in one hat). I cite from the allegedly correct solution: "So it must be from the G drawer of box GS, or either drawer of box GG. The three remaining possibilities are equally likely (why?) so the probability the drawer is from box GG is 2⁄3."

Consider, however, the following three versions of the problem with the same protocol (randomly chosen box, then randomly chosen coin). It is supposed that there is a mechanism to dispense coins, so that you cannot feel with your fingers that there are many of them in a box.

Version 1,  GG box contains 100 gold coins, others like before
Version 2,  GS box contains 50 gold and 50 silver coins, others like before
Version 3 (my favorite) GG box contains only 1 gold coin, others like before


The only correct way to solve the problem (Bertrand's) is by treating the boxes as the individual cases, and by summing the probabilities that the cases would produce. This works for all three of these versions literally in the same way as it does for the original one, and results in the same answer, 2/3, thus clearly indicating that the usual speculations about the absolute number of gold coins are quite irrelevant.

If we consider the general case, a GG box with X gold coins, a GS box with Ng gold and Ns silver coins, and an SS box with Y silver coins, then the absolute probabilities are

p(GG box and random gold coin)=1/3


(a gold coin is extracted from the GG box with certainty),

p(GS box and random gold coin)=(1/3)*[ Ng / (Ng + Ns)]


(a gold coin is extracted from the GS box with probability Ng / (Ng + Ns)), and the conditional probability that a randomly chosen gold coin originates from the GG box is

p(GG box | random gold coin) = 1/[1 + Ng/(Ng + Ns)] = (Ng + Ns)/(2*Ng + Ns)


It depends only on the ratio Ng/Ns of gold and silver coin numbers in the GS box. The posterior odds GG vs GS box are (Ng + Ns) : Ng and have nothing to do with the number X of gold coins in the GG box. If Ng = Ns, these odds are 2 : 1, and it is mere coincidence that X : Ng gives the same 2 : 1 for X=2 and Ng=1 in the original formulation.
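As a sketch only (the function name is mine), the general formula above can be written out and checked with exact fractions for each version:

```python
# Sketch of the general formula above (function name is mine).
# GG box: X gold coins; GS box: Ng gold + Ns silver; SS box: Y silver.
from fractions import Fraction

def p_gg_given_gold(Ng, Ns):
    """P(GG box | drawn coin is gold), boxes chosen uniformly at random."""
    p_gg_and_gold = Fraction(1, 3)                         # certain in GG box
    p_gs_and_gold = Fraction(1, 3) * Fraction(Ng, Ng + Ns)
    return p_gg_and_gold / (p_gg_and_gold + p_gs_and_gold)

# The answer depends only on the ratio Ng : Ns, never on X or Y:
print(p_gg_given_gold(1, 1))    # 2/3 (original problem, and Versions 1-3)
print(p_gg_given_gold(50, 50))  # 2/3 (Version 2)
print(p_gg_given_gold(1, 99))   # 100/101
```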

Argument that the chosen box has coins of the same type 2⁄3 of the time remains in force for any numbers of coins. But the next statement "So regardless of what kind of coin is in the chosen drawer, the box has two coins of that type 2⁄3 of the time" works only if Ng = Ns, and is not that trivial. (Michaeldsp (talk) 04:21, 26 July 2011 (UTC))

I must warn you that it doesn't matter what you think the "right" solution is. What matters for a Wikipedia page is what is in the literature. Second, Bertrand didn't write his paradox as a puzzle where the reader is supposed to figure out the right answer (and it really shouldn't share a page with the three card puzzle for this reason). He pointed out that it is technically incorrect (as you discuss) to merely count the cases, no matter whether you use the three boxes or the six coins as your cases. Either approach is technically invalid. That is what the Wikipedia page on this subject should say.

I do not count cases as you try to imply; I calculate probabilities of various outcomes. I treat boxes as equiprobable starting cases (each separates further into subcases), which is absolutely valid (a randomly chosen box, as formulated), and I do not at all count boxes as outcomes. On the other hand, counting coins as starting cases is incorrect, and this is what my examples were about. If you have 100 gold coins in the GG box and 1 G coin in the GS box, you do not choose equally from 101 G coins. So, your statement that 'Either approach is technically invalid' is quite meaningless. The same refers to your statement that 'How things get "mixed together," as you put it, is completely irrelevant'. On the contrary, it is exactly what matters. The fact that the gold coins are physically separated by boxes prevents you from choosing from the whole pool of 101 G coins and invalidates the 'coin solution'.

Next, looking at the history of revisions, I have realized that it is your addition that counting cases can give the correct answer 'when the probability of getting a gold coin is either 0 or 1 in every case you do consider'. This sounds like a puzzle to me. What probability do you have in mind? The a posteriori probability is 0 or 1 for any possible case, while the a priori one is never (generally) 1, though it may be 0.

Lastly, I do not find the construction 'if people like you keep trying to treat ...' an appropriate style, the more so since I have not even mentioned the word puzzle in my original comment. Would you please be less personal and more specific in essence? (Michaeldsp (talk) 19:57, 2 August 2011 (UTC))

Discussion moved to a more appropriate place, Michaeldsp's talk page. JeffJor (talk) 20:45, 3 August 2011 (UTC)

## Poor organisation

The article currently explains the "box" problem, and then restarts from scratch with the "card" problem, explaining much of the same probability stuff all over again, with different wording and some extra bits thrown in. Everything that is generic should be explained once only, under whichever variant is covered first (which presumably should be the box, since it is called the "box paradox"). The "card" explanation then just needs to demonstrate how it is exactly the same problem (which is pretty obvious really). 109.151.57.10 (talk) 18:40, 4 December 2011 (UTC)

## Wording

I made a minor change in the informal description to reflect more specifically the probability distinction for the casual reader who may still assume even probability distribution. Flying Hamster (talk) 17:13, 4 March 2012 (UTC)

Well, it already said as much; and the change you made is not quite logically consistent since it states the conclusion - that the two remaining possibilities are not equally likely - as a reason for the conclusion. But I'm going to try to clean it up a little to address your intended point. JeffJor (talk) 21:29, 5 March 2012 (UTC)

## Typo in "The paradox as stated by Bertrand"

The last sentence of the above section reads "It can be resolved only by recognizing how observing a gold coin changes the probability that the box is GS or GG, but not GG." Now I'm not an expert here, but it seems to me this is a typo and should in fact read "It can be resolved only by recognizing how observing a gold coin changes the probability that the box is GS or GG, but not SS". ???? Starfiend (talk) 16:38, 10 April 2012 (UTC)

Then I guess I worded it poorly, since you misunderstood. No probability actually changes; but the combination of a specific observation with each event in {GG, GS, SS} is different than the probability of just each event itself. For example, if "OG" is the event "observe a gold coin," the probability is 1/3 for each of the events {GG, GS, SS}. But the probability is 1/3, 1/6, and 0, respectively, for {GG&OG, GS&OG, SS&OG}. This is the change I meant, and GG is the one that doesn't change in combination with OG. Then, the conditional probability is found by normalizing these probabilities with the factor 1/2, the sum of those probabilities. So P(GG|OG)=2/3, P(GS|OG)=1/3, and P(SS|OG)=0. Let me think about a way to word it better. JeffJor (talk) 21:16, 10 April 2012 (UTC)

## Similar, not equivalent

The lead of the article stated that Bertrand's box paradox is logically equivalent to the more famous Three Prisoners problem (Martin Gardner) and the Monty Hall problem (Steve Selvin, Marilyn vos Savant). I changed the words "logically equivalent" to "similar" because I do not believe the original statement is correct. Certainly there are strong similarities, but I do not see that the problems are mathematically equivalent (I understand what that means); I don't really know what "logically equivalent" means.

Monty Hall and the Three Prisoners certainly are more than similar, though whether or not they are mathematically equivalent depends on how one chooses to mathematize the problems. Not everyone makes the same translation from a verbal puzzle to a formal problem in probability theory. But with the most popular mathematization of the two problems, they are indeed mathematically identical. I don't see a natural translation of the three boxes into the three doors. The host opens door 2 or door 3 and reveals a goat. The player is interested in which of the three situations holds: door 2 and door 3 contain goat and goat, or goat and car or car and goat. When the host opens door 3, situation 2 (goat and car) is ruled out. What is left is goat and goat, or car and goat. In the former situation he is only 50% sure to open door 3, while in the latter situation he is 100% sure.

So we have a mathematical similarity in that the prior odds are 1:1, then comes some information which has a likelihood ratio of 1:2, hence the posterior odds are 1:2 (Bayes rule).

But to say that the two problems are logically equivalent should entail, in my opinion, finding a one-to-one correspondence between all components of the two problems. I think it can't be done. In the three boxes problem you might see either a silver or a gold coin. In the three doors problem you might see door 2 opened or you might see door 3 opened. The mechanisms which lead to these possibilities are quite different. In the three boxes problem it is pure chance. The sample space has six elements of equal probability (three boxes, two coins per box). In the Monty Hall problem the action of the host is (partly) forced. The sample space has four elements of unequal probability. (Car is behind door 1, host opens door 2; car is behind door 1, host opens door 3; car is behind door 2; car is behind door 3.) By splitting outcomes in one description, or merging them in the other, one can finally make the two problems correspond in a one-to-one fashion.

Maybe I'm wrong but in any case the statement needs a literature reference to support it. Richard Gill (talk) 07:42, 19 April 2013 (UTC)

Two probability problems are mathematically equivalent if they can (not must, because as you noted probability problems can usually be solved by different approaches) be solved by the same mathematical formulation. If C1, C2, and C3 are the three a priori cases, and I is the information you have, each of these three problems can be solved by P(C1|I) = P(I|C1)*P(C1)/[P(I|C1)*P(C1)+ P(I|C2)*P(C2)+ P(I|C3)*P(C3)] = [(1)*(1/3)]/[(1)*(1/3)+(1/2)*(1/3)+(0)*(1/3)]=2/3 and the equivalent statements for C2 and C3. That makes them mathematically equivalent, regardless of alternate solutions (which will, btw, also have equivalents across all three). Most references that mention both state this equivalence. And they can't be mathematically equivalent unless there is some logical equivalence somewhere.
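The shared computation in the previous paragraph can be sketched as follows (a hedged illustration; the helper name is mine):

```python
# Hedged sketch of the Bayes'-rule formulation above; helper name is mine.
from fractions import Fraction

def posterior(priors, likelihoods):
    """Bayes' rule: normalize prior * likelihood over all cases."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [Fraction(1, 3)] * 3                             # P(C1), P(C2), P(C3)
likelihoods = [Fraction(1), Fraction(1, 2), Fraction(0)]  # P(I | Ci)
post = posterior(priors, likelihoods)
print(post[0], post[1], post[2])   # 2/3 1/3 0
```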
No, I don’t know any reference that details the logical one. But it is there. They are logically equivalent because you can symbolically describe each in the same way, although it isn't the most intuitive representation for any of them that makes it so. Part of that is because in the MHP, there is no symmetry between the two obvious kinds of results, car and goat, like there is between the gold and silver coins. So the equivalence isn't equating prizes to kinds of coins, or even what's behind doors to in boxes. And in the BPP, there is no knowledgeable entity that forces the information to go a certain way.
In fact, the BPP contains more information than is needed, and that seems to get in the way of seeing the logical equivalence. Let the first box contain any number of gold coins, only. The second contains any number of silver, and the third contains equal numbers of each. Said another way, there are two parallel properties G and S. One box has property G only, one has property S only, and one has Both. A box is chosen at random, and somehow a single property is revealed; say, G. The probability it is the G-only box is 2/3 based on the formula I gave above. Similarly in the MHP, your door must have at least one, but can have both, of two properties: the door to its right (imagine they are laid out in a circle) has a goat, or the door to its left does. Parallel properties, and the door can be R-only, L-only, and Both. Monty Hall reveals one of these properties to you.
JeffJor (talk) 21:44, 21 April 2013 (UTC)
Nice construction (the doors in a circle). Here is another equivalence question. Vos Savant explained Monty Hall by translating it to what I like to call the three cups problem: the host hides a coin underneath one of three upside-down cups on a table. The cups are indistinguishable to you, but distinguishable to the host. The host shuffles the cups. You pick one. He shows you that one of the other two is empty and offers you the choice to switch to the third cup. There is no conditional probability at all involved in solving this problem: the third cup hides the coin if and only if your initial choice doesn't, which has chance 2/3. The natural probability theory description of the three cups problem has an outcome space of just two outcomes with probabilities 1/3 and 2/3. For many people the three cups and the three doors problem are obviously the same, but most probabilists would only call them similar: the three doors problem can be reduced to the three cups problem by invoking symmetry. But you can't "reduce" the three cups problem to the three doors problem. The relationship between the two problems is asymmetric.
I am not aware of any source which discusses these "sort of equivalences" in a careful and systematic way. Richard Gill (talk) 07:59, 9 May 2013 (UTC)
It is not whether the cups, when viewed in isolation, are distinguishable that matters. It is whether the cups, in the situation described, are distinguished. Since one is revealed, and one is not, they are always distinguished to both you AND the host, so the point you are trying to make is moot. Both problems are the same, whether or not you can make a distinction in isolation. You can in the situation.
As Morgan, et al., claim, the correct solution is always conditional. (Or alternately, it is based on an unconditional solution that includes what they call the condition in the "initial" setup. By that I mean the host could have randomly chosen, ahead of time, to reveal the higher numbered, or lower numbered, unchosen goat door. That doesn't mean different doors in all cases.) The probability is always a function of how the host chooses a door/cup to reveal. But Morgan, et al., are wrong in claiming that, based on the knowledge gleaned from the problem statement, you can assume a bias for the host. I'm not saying you must claim no bias, but any bias you can assume is equally likely to favor either door/cup, and that means the probability the host chooses either one is the same. JeffJor (talk) 22:55, 9 May 2013 (UTC)
We need a more subtle (in fact: relative) notion of indistinguishability. Of course when there are three cups on a table you can distinguish them by their locations. The point of the three cups story is that the player can't recognise which cup is which after they have been shuffled, while the host (clearly) can. No doubt the philosophers have some technical terms connected to the theory of naming things. Richard Gill (talk) 11:25, 10 May 2013 (UTC)
I think you missed my point. The solution to your Three Cups Problem requires that each cup receive a label. The "indistinguishable problem" you want to describe is really the "combined doors" one. The two unchosen doors are pushed together, and a goat is led from behind them in such a way that you can't tell which door it came from. You can't do it with empty cups, as you describe; one needs to have a coin, and the other two need peas. Two are combined, and a pea is revealed. And the point is that the answer to the indistinguishable problem can only be (for the two remaining containers) 1/3 and 2/3. The answer to the "distinguishable problem" is X/(1+X) and 1/(1+X); but unless you are told what X is, you can only assume X=1/2.
The point that makes it an "indistinguishable problem" is that you can't associate the information with a single "container," not that you can't distinguish the containers. Push Door #2 and Door #3 together, and you have an "indistinguishable problem." The same applies to the Two Child Problem (which is quite literally the Box Paradox, with the fourth box containing different coins). If you "know that at least one is a boy" because you asked "are any of the puppies male?" (note how I improved the question by not saying "one"), then there is a 1/3 chance both are boys because the trait has not been associated with a specific offspring. If you "know that at least one is a boy" because you meet father and son walking in the street, you have associated the information with a specific child, even though no means of "distinguishing" the children is evident. The answer is 1/(1+2X), where X is the probability a man will choose to walk with his son instead of his daughter. And if you simply "know" the information? Then Bertrand's solution applies, but with four cases. The answer must be the same as "what is the probability the two children in a family share the same gender?" That is, 1/2. JeffJor (talk) 12:14, 10 May 2013 (UTC)
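The 1/(1+2X) calculation in the walking-father variant can be sketched like this (names mine; X is the probability a father of one boy and one girl chooses to walk with the son):

```python
# Sketch only (names mine): X = chance a father of a boy and a girl
# is seen walking with the son rather than the daughter.
from fractions import Fraction

def p_both_boys_given_walking_son(X):
    """P(two boys | a father is seen walking with a son)."""
    # Each birth order (elder, younger) has prior 1/4; the weight is the
    # chance such a father is seen walking with a son.
    weights = {("B", "B"): Fraction(1), ("B", "G"): X,
               ("G", "B"): X, ("G", "G"): Fraction(0)}
    prior = Fraction(1, 4)
    total = sum(prior * w for w in weights.values())
    return prior * weights[("B", "B")] / total   # = 1/(1 + 2X)

print(p_both_boys_given_walking_son(Fraction(1, 2)))  # 1/2 (no preference)
print(p_both_boys_given_walking_son(Fraction(1)))     # 1/3 (always the son)
```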

## Bayes rule, symmetry arguments

I added the Bayes' rule version of the solution by Bayes' theorem, and filled in the formal details of a solution by symmetry. I also corrected the wording and the notation in the first solution (referring to the original silver and gold coins version of the problem). The probabilities which are defined there are various conditional probabilities, and the solution is actually the Bayes' rule solution. Richard Gill (talk) 12:10, 19 April 2013 (UTC)

## A simple solution

There are two cases that the other coin is a gold coin, and one case that it is a silver coin.--Albtal (talk) 10:13, 17 May 2013 (UTC)

The problem with this formulation is that many will assume that a "case" corresponds to a box, not a coin. They'll say "There's one case where the other one is silver, and one case where the other one is gold." So the key additional step is "After choosing a random box, you next choose a random coin." Or something like that. 23.30.218.182 (talk) 15:31, 23 May 2013 (UTC)
Surely in my simple view of the problem (which is equivalent to yours) "case" corresponds to a coin. There are three gold coins. For two of them, say G1 and G2 (which are in the same box), the other coin in the same box is a gold coin: for G1 this is G2, and for G2, this is G1. Only for one of the three gold coins, G3, is the other coin in the same box a silver coin. And after choosing a box at random and withdrawing one coin at random, if that happens to be a gold coin I got G1, G2 or G3. It seems to me that the key to the whole understanding is to see the fact For G1 this is G2, and for G2, this is G1.--Albtal (talk) 11:51, 26 May 2013 (UTC)
What probably causes the most confusion in this sort of probability problem, is what "case" should mean. Without realizing it, most people want it to mean "something that the Principle of Indifference applies to." That way, they can assign equal probabilities to each "case." And the controversy comes because both "boxes" and "coins" fulfill this definition, so both sides feel they have applied it properly even if it was done only intuitively. But the information given about the selections only applies the version where the cases are coins, since all we learn about is a coin. JeffJor (talk) 13:16, 27 May 2013 (UTC)
Of course in searching the best solution we have to take account of both sides of Einstein's advice: Make things as simple as possible, but not simpler.--Albtal (talk) 10:57, 30 May 2013 (UTC)
The simple solution which I proposed also occurred before on this discussion page and in the article itself. But as it has been somewhat hidden within other considerations, I have, maybe superfluously, opened this section.
But as we have this section now, here some other simple arguments:
Just as in the well formulated Monty Hall Problem (MHP) (the contestant determines two doors of which the host has to open one with a goat), the first step in the BBP is to eliminate one of the two boxes from which the player did not choose to open a drawer: if he got a gold coin, the SS box is eliminated, the GG box otherwise. So he will win the game (which now means having a gold coin in the other drawer of his box) in two of three cases, namely if he didn't choose the mixed box.
The great advantage of the BBP as opposed to the MHP is that step 0 disappears completely: that is, finding the statement of the task which really has the claimed 2/3 solution.
There is another nice correspondence between the BBP and the MHP: In the BBP we know at the beginning that the chance to get the mixed box is 1/3. And (not new here) we know with certainty that we'll get a gold or a silver coin. If we now had a chance of having the mixed box other than 1/3 (maybe 1/2), we would have known this at the beginning, which is not possible.
In the well formulated MHP we know at the beginning of the game that we have "chosen" the car with probability 1/3; and we know that the host will open another door with a goat. Here again a chance other than 1/3 for the chosen door is not possible if we assume that there is no difference whether the host opens door 2 or door 3. This last, somewhat strange, consideration is also not necessary with the BBP: we don't need the help of a host at all.--Albtal (talk) 12:36, 30 May 2013 (UTC)

In his answer to the "The Three Cards" teaser problem mentioned above, John R. Schuyler presents a similar problem claiming a 2/3 solution: Suppose a cook has two pancakes. He says that one pancake is brown on both sides, and the other is brown on one side and golden on the other. He places one pancake on your plate, and it is brown side up. What is the probability that the other side is also brown? I think here step 0 is the most important: tell the smallest extension of the problem which really has a 2/3 solution! But completing the joke, we could also add the cook saying: You'll win if you guess the color of the other side. And we could place the story on a game show ...--Albtal (talk) 10:59, 4 June 2013 (UTC)