
Talk:Monty Hall problem/Archive 3: Difference between revisions

From Wikipedia, the free encyclopedia
::::::To begin with, I believe you are referring to a typo as my math (which, strangely, was already fixed before your post...); what was supposed to be there is a /2. In other words, if you can't tell which goat is which, then you average the success chance of switching from each goat. ...tbc


:::::::Actually, I was referring to what you posted [http://en.wikipedia.org/w/index.php?title=Talk:Monty_Hall_problem&diff=40602168&oldid=40593321 here], where your "math" was only incorrect because you were choosing the completely wrong method for assessing the total chance of winning by switching. I did notice that later you "corrected" your "typo" to "(2/5 + 1/4)/2 = .65", which meant that you were getting not only the mathematics but even the basic arithmetic wrong -- or didn't you notice that dividing (2/5 + 1/4) by 2 made it .325 and not .65? But as previously explained, ''neither'' of these is correct. I think the fact that you are quibbling over us failing to follow your changing of your argument to make it ''more wrong'' is an indication of your emotional desire not to be wrong and not a competent judgement of the situation. -- [[User:Antaeus Feldspar|Antaeus Feldspar]] 15:33, 22 February 2006 (UTC)


:::::::: So what, you rooted through the page's edit history to find a typo that had already been fixed, in order to claim it was someone's argument? By the way, you might want to recheck your snotty little failed attempt at correcting my math... (2/5 + 1/4) = 13/20 = 6.5/10 = .65

:::::::::There is nothing wrong with going through edit history in order to see how things have changed; that's what edit history is there for. It is not wrong, '''unlike''' [http://en.wikipedia.org/w/index.php?title=Talk%3AMonty_Hall_problem&diff=40722310&oldid=40721476 changing other people's words] in order to make them appear wrong, which is vandalism. And as has already been explained, ''both'' versions of your calculation were wrong; I worked with your original calculation rather than your "corrected" calculation because when you "corrected" it by adding a "/2", which you did [http://en.wikipedia.org/w/index.php?title=Talk:Monty_Hall_problem&diff=prev&oldid=40605034 here], you actually made it incorrect on the level of basic arithmetic. I was addressing your more subtle error: the idea that you could get the total probability of the player seeing a goat and then switching for the win by directly adding the chance of winning by switching after Goat #1 has been seen to the chance of winning by switching after Goat #2 has been seen. Whether you add them directly ''or'' you divide the total of their chances by two, it gives you the incorrect answer. What gives the ''correct'' answer is to take the chance of winning by switching after seeing Goat #1, ''multiply it by the chance of seeing Goat #1'', and add it to the chance of winning by switching after seeing Goat #2, similarly multiplied by the chance of seeing Goat #2. Now ''stop vandalizing this page''; neither editing someone else's words to make it appear they said things they didn't nor removing discussions because you realized you said things that were incorrect is acceptable Wikipedia editing. -- [[User:Antaeus Feldspar|Antaeus Feldspar]] 17:07, 22 February 2006 (UTC)
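The weighted-average rule described in the reply above is the law of total probability. Here is a minimal sketch of the difference between the correct calculation and the two incorrect ones (the probabilities used are made-up illustration values, not numbers from this thread):

```python
# Law of total probability: weight each conditional win chance by the
# probability of its condition, then sum. Neither a plain sum nor a plain
# average of the conditional chances is correct in general.
def total_win_chance(cases):
    """cases: iterable of (probability_of_case, win_chance_given_case)."""
    return sum(p_case * p_win for p_case, p_win in cases)

# Hypothetical example: Goat #1 is seen 80% of the time (win chance 0.5
# by switching), Goat #2 is seen 20% of the time (win chance 0.25).
correct = total_win_chance([(0.8, 0.5), (0.2, 0.25)])   # 0.45
naive_sum = 0.5 + 0.25                                   # 0.75 (wrong)
naive_avg = (0.5 + 0.25) / 2                             # 0.375 (wrong)
print(correct, naive_sum, naive_avg)
```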


::Unfortunately, articles like this tend to attract mathematicians whose motivation seems to be to make any mathematical subject appear ludicrously complicated, presumably in an effort to boost their own egos. Have you seen the article for [[Birthday paradox]]? I mean, that's one that is pathetically easy to explain, but the amount of sheer mathwank in that article is downright obscene. --[[User:Bonalaw|Bonalaw]] 11:02, 14 February 2006 (UTC)

Revision as of 14:27, 27 February 2006

Template:Featured article is only for Wikipedia:Featured articles. Template:Mainpage date

I've moved the existing talk page to Talk:Monty Hall problem/Archive2, so the edit history is now with the archive page. I've copied back a few recent threads. Older discussions are in Talk:Monty Hall problem/Archive1. Hope this helps, Wile E. Heresiarch 15:28, 28 July 2005 (UTC)

Actual rules for the gameshow

Analysis of the errors in the intuitive answer

(as opposed to the correctness of the mathematical answer)

It is not enough to describe why the mathematically derived solution is correct. To resolve the paradox to the satisfaction of all, one must also describe why the intuitive solution is wrong. I think this requires three steps.

1) an understanding that the "intuitive solution" is just a dismissive term for a first analysis that turned out to have a flaw

2) a description of the logical steps that were employed in coming to the intuitive solution

3) an analysis of that logic, to find its flaw(s)

(1) I will not argue the point, but rather just hope that people agree with it.

(2) I think that the intuitive solution takes the following steps:

  • transform the game into a simpler yet completely isomorphic game
  • calculate (or intuit) the probabilities for that game
  • extrapolate the answer back to the original Monty Hall game

Here is the game that I believe is used intuitively (I’ll call it the Silly game):

  • There are three doors, A, B, and C.
  • Door A is open and has a goat behind it.
  • Doors B and C are closed, and there is a goat behind one and a car behind the other.
  • The contestant just guessed that door B has the car behind it, and is now being given a chance to change his mind.

Should he change it or not?

Clearly the answer to this is that the chance is 50% either way, and it does not matter whether the contestant changes his mind.

Now compare the Silly game to an original Monty Hall game at a point half way through, in which the contestant has guessed door B and Monty has opened door A.

  • There are three doors, A, B, and C
  • Door A is open and has a goat behind it.
  • Doors B and C are closed and there is a goat behind one and a car behind the other.
  • The contestant just guessed that door B has the car behind it, and is now being given a chance to change his mind

It seems that the Silly game is exactly the same as the original Monty Hall game at this point. To a viewer who has just ‘tuned in’ and does not know what has previously happened, the games look identical. Therefore, logic would dictate that the answer to the original question is that it doesn't matter whether the contestant changes their mind; the probability is 50% either way.

(3) It turns out, of course, that the two games are not completely isomorphic. There is a crucial piece of information missing from the Silly game that is needed to make it isomorphic with the Monty Hall game, as follows: Monty (who knows where the car is)

  • was required to open a door to reveal a goat,
  • was given an opportunity to open door C,
  • chose not to open door C.

Monty has thus said something concrete about door C (“I didn’t choose it, perhaps because I couldn’t choose it”), but nothing about door B. This is the source of the asymmetry between doors B and C, and the reason that door C is more likely to not have a goat behind it. Happyharris 20:57, 25 July 2005 (UTC)
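The asymmetry can be checked empirically. A quick simulation sketch of the half-way position described above (player on door B, Monty having opened door A), keeping only the rounds that match it:

```python
import random

def round_of_play():
    """One round: the player has picked door B; returns (door_monty_opens, car_door)."""
    car = random.choice(['A', 'B', 'C'])
    # Monty must open a goat door other than the player's pick (B).
    openable = [d for d in ['A', 'C'] if d != car]
    return random.choice(openable), car

random.seed(0)
rounds = [round_of_play() for _ in range(100_000)]
# Keep only the rounds that look like the Silly game: Monty has opened door A.
cars = [car for opened, car in rounds if opened == 'A']
p_stay = sum(c == 'B' for c in cars) / len(cars)    # about 1/3
p_switch = sum(c == 'C' for c in cars) / len(cars)  # about 2/3
print(p_stay, p_switch)
```

The conditioning step is the whole point: among just-tuned-in-looking positions, door C holds the car twice as often as door B, exactly because Monty passed over C.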

Perhaps, what we need in the main project page is an explanation to "why the intuitive answer is NOT the Monty Hall problem". Something like: Extrapolating the probabilities of an isomorphic game back to the Monty Hall game is the cause of much of the controversy of the game, since answering the probabilities of the game with randomized strategy (1/2) is NOT answering the probabilities of each strategy of switching (2/3) and not switching (1/3). aCute 08:50, 27 July 2005 (UTC)
I believe I've given a start at the top of the Aids to understanding section. Most people I've seen argue the incorrect position assume you can forget past events and look at it as a fifty-fifty chance (as they can with, say, coin flipping). The more-tenacious ones cannot be persuaded from their little optimization, when you ignore that they are using it. I plant the seed of doubt by showing that their premise fails in a case, card counting, in which they will almost undoubtedly acknowledge its failure.
Their premises always trump yours in their reasoning. If you don't try to correct improper premises, no argument you make will matter. This topic inspires debates to no end on Usenet because people ignore that. They end up, effectively, arguing over definitions, which is the prime example of a useless and stupid debate.
Logically, every single one of the article's diagrams can be redrawn and all the article's explanations can be rewritten while simply ignoring the past. Logically, any isomorphism you use can be discounted as an incorrect choice because it contradicts their assumptions, so you must be using hocus pocus. — 131.230.133.185 04:52, 10 August 2005 (UTC)

Breaking it down into steps

For the people who still find it hard to see why the probability is not 50/50 when there are two doors left, I hope the following illustration may help. I'm going to show that the Monty Hall problem is a specific case of a more general game; I'll call all the games that differ in their parameters "Hall games" for ease of use.

The basic idea behind a Hall game is this: We start with one large set of secret-hiding items -- they can be doors to be opened, or cards to be turned over, it doesn't actually matter. What matters is that there are n cards, but only one of them is the Prize card. The rest are Null cards.

Step One: The cards are divided into two hands. Each hand must have at least one card. For simplicity's sake, we call the number of cards in the first and second hands h1 and h2.

Step Two: Someone who can see which cards are Nulls can discard some number of Nulls from one hand, the other, or both. We call the number of cards remaining in each hand after the discarding of Nulls r1 and r2 (like h1 and h2, they must be at least one.)

Step Three: The player makes a guess at which of the two hands contains the Prize.

Step Four: The player makes a guess at which card out of the hand he selected is the Prize.

It's clear that to win the game, the player has to make correct guesses in both Step Three and Step Four. What are the chances of picking the correct hand? The first hand is correct h1/n of the time; the second hand is correct h2/n of the time. If we want, we could enumerate the cases: if h1=2 and h2=5, then the Prize could be the first card of the first hand, the second card of the first hand, the first card of the second hand ... et cetera, et cetera.

Now this is the part that many people find counter-intuitive. If we enumerate the cases, and then we reduce Nulls to meet any legal value of r1 and r2, we find that in no case can the removal of Nulls switch the Prize from one hand to the other. This means that the chance of the card being in the first hand or the second always stays at h1/n and h2/n -- even if the sizes of the hands do not stay at h1 and h2. If it's dealt to that hand, it stays in that hand; therefore the chance of it being in a particular hand is always equal to the chance that it was dealt to that hand.

What about the removal of Nulls? Does it affect anything after all? Yes, it does -- it affects the player's chances in Step Four. The chance that the Prize is in a particular hand is dependent upon h1 and h2 -- how many cards each hand started with. But the chance of finding the Prize in a hand (assuming it's the right hand) is based on how many cards that hand contains after Nulls have been removed -- since there's only one Prize, the chance of finding it in a hand of r1 cards is 1/r1.

Now, what are the chances of making both guesses correctly? If no Nulls get removed from either hand, then the chances of picking the Prize are either h1/n x 1/h1 (since r1 is equal to h1 when no Nulls have been removed) or by similar logic h2/n x 1/h2, which also multiplies out to 1/n. If, however, one of the hands -- say, hand 1 -- has been reduced down to one card (r1 = 1), then the chances of finding the Prize in that hand is h1/n x 1/1 -- if you've correctly guessed that the Prize is in that hand, you have a 100% chance of finding it in that hand when it's the only card in the hand.

With this being the general structure of the Hall game, we can see that the Monty Hall problem is really just the case where n=3, h1=1 and h2=2, r1=1 and r2=1. The chance that the Prize is in the player's hand is h1/n -- 1/3. The chance that it's in Monty's hand is 2/3. Before one Null is removed from Monty's hand, the chance of finding the Prize in his hand is 2/3 x 1/2 -- i.e., 1/3. But when the Null is removed, the chance is now 2/3 x 1/1 -- i.e., 2/3! -- Antaeus Feldspar 03:44, 26 July 2005 (UTC)
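The arithmetic in the last two paragraphs is easy to wrap in a function (the function name is mine, for illustration):

```python
from fractions import Fraction

def hall_win_chance(n, h, r):
    """Chance of winning by committing to a hand that was dealt h of the
    n cards and holds r cards after Nulls were discarded: the chance the
    Prize was dealt to that hand (h/n) times the chance of then picking
    it out of the r remaining cards (1/r)."""
    return Fraction(h, n) * Fraction(1, r)

# No Nulls removed (r equals h): either hand gives 1/n overall.
print(hall_win_chance(5, 2, 2), hall_win_chance(5, 3, 3))  # 1/5 1/5

# Monty Hall: n=3, player's hand h1=1, Monty's hand h2=2, both reduced to one card.
print(hall_win_chance(3, 1, 1))  # 1/3 (stay)
print(hall_win_chance(3, 2, 1))  # 2/3 (switch)
```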

three prisoners problem

kudos to those contributors to this article. i have been thinking about this for days (despite the fact that i "got it" ... after about 20 minutes or so). i find the "two sets" explanation the most clear, though i'm sure some people will love the bayes' theorem explanation. i also found the talk page very entertaining. some contributors are clearly manifesting what kahneman and tversky (writers on cognitive biases - the former won a nobel prize) refer to as "belief perseverance." geez, sit down with a friend and three cards for 10 minutes and you'd realize you're wrong. anyway... i thought you might find this page interesting:

http://econwpa.wustl.edu:8089/eps/exp/papers/9906/9906001.html

besides elaborating on many of the key assumptions (often unstated) underlying the MHP, the paper presents an interesting (and supposedly earlier) problem of the same form, the "three prisoners problem." here's the upshot...

There are three prisoners: you and Prisoners A and B. Two of you are to be executed, while one will be pardoned. The prison warden knows who will be executed and who will be pardoned (just as Monty must know where the goats and car are). According to policy, the warden is NOT allowed to tell any prisoner whether he/she is to be pardoned. You point out to the warden that if he tells you whether A or B will be executed, he will not be violating any rule. The warden says A is to be executed. What is the chance that you will be pardoned?

the answer presented, like that to the MHP, is that your chances of being pardoned remain at 1/3, while the chances of B being pardoned, GIVEN WHAT THE WARDEN HAS TOLD YOU, are 2/3.
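This can be simulated in the same way as the MHP. A sketch (my own, assuming the warden names A or B uniformly at random when both are to be executed, and conditioning on him naming A, consistent with the stated 2/3 answer for B):

```python
import random

def warden_round():
    """Returns (named, pardoned): the prisoner the warden names as executed
    (never you, never the pardoned one) and the prisoner who is pardoned."""
    pardoned = random.choice(['you', 'A', 'B'])
    nameable = [p for p in ('A', 'B') if p != pardoned]
    return random.choice(nameable), pardoned

random.seed(1)
rounds = [warden_round() for _ in range(100_000)]
# Condition on the warden naming A as the one to be executed.
pardons = [who for named, who in rounds if named == 'A']
p_you = sum(w == 'you' for w in pardons) / len(pardons)  # about 1/3
p_b = sum(w == 'B' for w in pardons) / len(pardons)      # about 2/3
print(p_you, p_b)
```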

I found looking at this problem reinforced how *$&%^@# hard it is to get one's head around the MHP, because despite having read about the latter for days, i still had to think about the three prisoners problem for several minutes to get my head around IT, despite knowing that it was essentially the same as the MHP. K-razy.

someone mentioned the idea of blocking edits to featured articles for some period of time. while apparently there is a policy against such things, i agree with the suggestion, at least for articles that are sources of dispute. the featuring of an article doubtless attracts many potential editors, some of whom may simply not know enough about the subject to edit appropriately. for example, the first time i read the monty hall page (when it was featured), someone had altered the solution section to express the misguided (given certain assumptions, which too often are unstated) 50/50 approach. needless to say, i was confused by the fact that all other sections of the article contradicted the solution.

good job. Contributed by 24.89.202.141.

The Mueser and Granberg paper is already one of the references, and the three prisoners problem is the Gardner problem referred to in both the "The problem" section and the "Origins" section (although Gardner's version is not described). I suspect the point of allowing featured articles to be edited is to reinforce the notion that wikipedia is really, no kidding, editable by anyone. I find this to be a very principled, and quite admirable, stance even though it does require some effort (and I suspect protecting the main page was only done with the greatest reluctance). I cannot take credit for much more than nominating the article as a WP:FAC, but thanks for the compliment. -- Rick Block (talk) 01:01, July 27, 2005 (UTC)

Game Theory

I copied this back in from the archive:

I would argue it's a problem in probability. Rich Farmbrough 12:44, 23 July 2005 (UTC)
I have switched the statement back because it is certainly a problem of probability. Apparently MathWorld categorizes the problem under game theory, but in my opinion the connection is tenuous. In the standard statement of the problem there are no "conflicting interests" because the host is not an active player in the sense that he cannot make choices that affect the outcome. Because the problem clearly could generalize into a game theoretical topic, I would have no objections to a Wiki categorization as such. In the text, however, this would require some explanation, so in my opinion it isn't appropriate for the first line. Certainly it does not supersede probability. Davilla 16:21, 23 July 2005 (UTC)

Hecatonchires originally changed 'probability' into 'game theory', and then did it again two times. I've changed it back and posted a notice on his/her Talk page, pointing him/her to this discussion. I've invited him/her to start a discussion here if s/he wants it changed to 'game theory' again. Phaunt 10:54, 10 August 2005 (UTC)

I was initially quite dubious about this edit, but became less so when I looked up game theory and realized that it does rather fit the Monty Hall problem after all, at least by the definition in the article. How does a player maximize his chances of walking off with the prize? Of course, this may mean that it's game theory that has to be tweaked to clarify the matter. -- Antaeus Feldspar 23:08, 10 August 2005 (UTC)
Game theory is used to analyze strategic situations, where "strategic" means that there are interacting interests between two players. For instance, Robert Gibbons's book on game theory starts with this sentence: "Game theory is the study of multiperson decision problems." This is echoed (somewhat less clearly) in our article: "A definitive feature of game theory that distinguishes it from decision theory whose main subject is also studying formalized incentive structures is that game theory encompasses decisions that are made in an environment or states of the world in which strategic interaction between various players occurs." (Actually this sentence is really confusing, I will fix it.) The monty hall problem is clearly not a multiperson decision problem, since there is only one interested actor, the player. I think probability theory or utility theory would be best descriptions of the problem. --best, kevin ···Kzollman | Talk··· 23:55, August 10, 2005 (UTC)
Better still, decision theory? --best, kevin ···Kzollman | Talk··· 23:59, August 10, 2005 (UTC)

Ahem, don't we have an encyclopedia to write here? It seems that you are arguing about the distance between two points on a beach ... jeesh. Well, if it is really important for one of you to prevail here then fine, go at it. Every once in a while, however, look around and notice what you are spending your valuable time doing.  ;-( hydnjo talk 02:15, 11 August 2005 (UTC)

I can't believe this, they're still at it! Jeesh. hydnjo talk 02:31, 16 August 2005 (UTC)

Back and forth, the edits revert again and again. Can we get some consensus on this? It's two words... I don't mind the current compromise of mentioning both terms in the same sentence, but would prefer only mentioning probability. But is there any way we can get people to stop changing it every week or so? Fieari 05:22, August 26, 2005 (UTC)

I, for one, think this qualifies as one of the lamest edit wars ever.
If this is an edit war, it seems to be a rather slow one. Note that the last two reverts were on 10 and 15 August. The 23 August revert had to do with Increasing the number of doors, see below.
Anyway, I don't really have a problem with either formulation, if there's a majority. My problem is just that the 'game theorists' refuse to discuss this here on the talk page, even after having been explicitly invited. That was the reason for the last 8/10 and 8/15 reverts.
I haven't voiced my own opinion yet; I like 'decision theory'. The problem with 'game theory' is that there is only one player. Phaunt 00:44, 27 August 2005 (UTC)

My own understanding is that within the context of game theory, "Monty Hall Problem" refers to some modification of the problem from the one currently given in the introduction to this article. Specifically, some range of behaviors is permitted to the host. In this case, each of the two players chooses a strategy from within their ranges of possible behaviors, and the task is to identify a Nash equilibrium. The Mueser and Granberg paper describes this approach to the question. --Wmarkham 21:05, 5 September 2005 (UTC)

Reverted addition under Increasing the number of doors

I just reverted the additions by User:62.99.223.20 under Increasing the number of doors, because they were copied verbatim from [1]. This page was linked to, but that doesn't change the copyvio. Also, it didn't really belong there (under aids to understanding) but rather under Variants -> n doors, where a shorter discussion of this variant already exists. I invite User:62.99.223.20 to expand on this if he feels it's too short (but without violating copyright, of course).

Congratulations

It's a bit late, but I want to congratulate and thank everyone who worked on this and helped it become a featured article. Good job! Phaunt 11:53, 27 August 2005 (UTC)

"the assumptions explicitly stated below"

The intro refers to "the assumptions explicitly stated below" that do not appear (to me) to exist in the article.

From the introduction to the article:

In this puzzle a player is shown three closed doors; behind one is a car, and behind each of the other two is a goat. The player is allowed to open one door, and will win whatever is behind the door. However, after the player selects a door but before opening it, the game host (who knows what's behind the doors) must open another door, revealing a goat. The host then must offer the player an option to switch to the other closed door. Does switching improve the player's chance of winning the car? With the assumptions explicitly stated below, the answer is yes — switching results in the chances of winning the car improving from 1/3 to 2/3.

The description of the puzzle appears to be quite clear on the constraints on the host's behavior, and I believe that any mention of assumptions is unnecessary here. I happen to be someone who does quibble over the statement of the problem, and I find this one to be quite clear, so this is probably not problematic.

Unfortunately, there are references to "the assumptions" throughout the article. Further unfortunately, my own, possibly nonstandard, position is that the "Monty Hall problem" is really a related family of problems. The ones in Selvin's letter and the Parade article each pose slightly different problems than the one stated above. The stated result (or any clear result) can only be obtained in those cases if additional assumptions are made. In my opinion, understanding the nature of some of the confusion surrounding the Monty Hall problem is made easier if the existence of these (nontrivial, IMO) assumptions is made clearer.

Since the article is already quite good, and my edits could be construed as having an agenda, my hope is that one of the perennial maintainers of the article is willing to adjust it in order to reflect my concerns, in a manner that is true to what the article describes. My guess is that the only changes needed are to move the existing comments about assumptions from the "Anecdotes" section to the introduction of the Parade article, and to eliminate the reference to assumptions from the intro. In fact, after consideration, I think it is safe for me to make the latter change myself. My observation is that these words currently serve no purpose one way or another, and hopefully I am a good representative of the view that the existence of assumptions is important. I will do this shortly.

--Wmarkham 19:44, 5 September 2005 (UTC)

The words referred to the "Mueser and Granberg" constraints in the next section, although I agree the current problem statement in the lead-in is unambiguous without this forward reference. As far as I can tell, the problem itself is generally viewed to be the specific one described in the lead-in and not the family of problems framed by any ambiguous statements of it. The Parade controversy, in particular, was not primarily due to the missing assumptions but the apparent inconsistency between there being two doors to choose but the choice not being 50/50. Marilyn vos Savant has said (in the 1991 NY Times article) that virtually no one complained that the assumptions were not clear and that she thought the constraints on the host others have added as explicit assumptions were implicit in the statement of the problem published in Parade magazine. The constraints on the host are described in "The solution". Perhaps this section and "Anecdotes" could be strengthened with slightly more discussion about the ambiguities in the Parade magazine problem statement, referencing the 1991 NY Times article. -- Rick Block (talk) 21:40, September 5, 2005 (UTC)

Why I think it's wrong...

When you're given the choice to switch or not, it still doesn't matter, because switching or not switching is a binary decision. To give the probability of 1/3 to the scenario where you decide not to switch doesn't make any sense; when the host gives you the option, the scenario and therefore the rules change. You're no longer choosing between three doors, you're choosing between two.

It'd be different though, if you were not given the choice to switch. When one door is revealed not to be the winner, you couldn't say "now my chances are 50/50", though from an intuitive standpoint you could say that, because without any ability to act on the events that occur, the probability really doesn't change from the standpoint of the start (though if we were weather forecasters we would say the chances were now 50%, this isn't a forecast that continually updates; gamblers want to know what their overall chances are from the start, because that's where they're stuck making their decisions).

The thing here is when someone decides to base chance on when one assesses the situation or base the chance on when a choice was made. Gamblers will want to base it on when they made their choice, but weather forecasters will want to assess the situation continuously.

If you have read the entire article then you missed the point. It's a scam: "the game host (who knows what's behind the doors) must open another door, revealing a goat". If you believe otherwise let me explain one more time (I'm the scammer and you're the scamee or mark). To make the example more obvious we are going to start with ten doors. You choose one of them and by default you un-choose nine of them. Would you agree that your chances at this point are one in ten or 10%? OK then, I then slowly and dramatically -ta dahh- open eight doors. It is critical at this point that you understand that I know where the prize is, so the eight doors that I (the scammer) open are known to me to be non-winners (goats). So, we're down to you with a 10% door and me with a 90% door. Do you still think it's a 50/50 choice? So, bring it back to three doors, and so long as you realize that the host (scammer) knows which doors are non-winners, which he proceeds to open, then you should obviously choose his remaining door whether you start with three or ten or one hundred doors. --hydnjo talk 21:33, 6 November 2005 (UTC)
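The ten-door version above is easy to check by simulation. A sketch (my own, assuming the host leaves one goat door closed at random whenever the player already holds the prize):

```python
import random

def ten_door_round(n=10):
    """Player takes door 0; the host, who knows where the car is, opens
    n-2 goat doors and leaves exactly one other door closed.
    Returns (stay_wins, switch_wins) for this round."""
    car = random.randrange(n)
    # The host's remaining closed door: the car if the player missed it,
    # otherwise an arbitrary goat door.
    hosts_door = car if car != 0 else random.randrange(1, n)
    return car == 0, hosts_door == car

random.seed(2)
rounds = [ten_door_round() for _ in range(100_000)]
p_stay = sum(s for s, _ in rounds) / len(rounds)     # about 0.10
p_switch = sum(w for _, w in rounds) / len(rounds)   # about 0.90
print(p_stay, p_switch)
```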
Also note that it doesn't matter if you think it's wrong. It's objectively right. Probability is nothing but a measure of the fraction of attempts that produce a given outcome. Empirical studies have demonstrated that the answer given in this article is right. No amount of argument or logic can change the fact that this article's conclusion matches reality. --P3d0 22:10, 6 November 2005 (UTC)
It does matter to me. I'd like to think that we have explained this in a way that 69.246.138.166 comes away with an understanding rather than a dogma. But then... --hydnjo talk 00:36, 7 November 2005 (UTC)
That's a noble goal, but with all due respect, I wasn't talking to you.  :-) --P3d0 14:58, 7 November 2005 (UTC)
I think I see where you're getting confused, so I hope you'll let me try to explain. Let's generalize the problem as given to a new class of problems:
  1. The host presents X doors, one of which has the prize behind it, and the others of which are "misses".
  2. The player divides these doors into two sets, each of which must have at least one door in it.
  3. The host can then adjust either or both sets by removing "miss" doors or by adding new miss doors (the player cannot distinguish just-added doors from the doors that were originally there.) There must still be one door in each set and the host can only add or remove misses, not the door with the prize.
  4. Challenge: The player must correctly guess which of the two sets contains the door with the prize.
  5. Challenge: The player (assuming they picked the correct set) must correctly guess which door in the set contains the prize.
Now, we can quickly confirm that the Monty Hall problem is just a specific case of the general problem. The host presents three doors (step 1); the player divides them into a one-door set and a two-door set (step 2); the host then removes a miss door from the two-door set (step 3).
As for the challenges, let's look at the second (step 5) before the first (step 4). The effect of the host removing a miss door in step 3 is to eliminate any actual "challenge" from the challenge of step 5; you can't make a right choice or a wrong choice if you have no choice to make! This means that the odds for the first challenge, step 4, become the odds for the whole problem.
So what are the odds for step 4? Well, in step 2, the player divided the doors into two sets with no idea of which one held the prize door. There's a 1/3 chance that the prize door ended up in the one-door set, and a 2/3 chance that the prize door ended up in the two-door set. Now, 69.246.138.166's challenge to the correctness of the stated solution is that "when the host gives you the option, the scenario and therefore the rules change." My question in response is: "How?" The host can remove a miss door from a set; in our expanded general problem, he can remove multiple miss doors or add multiple miss doors to either set. But nothing he can do can change which set the prize door is in; therefore the odds of which set the prize door can be found in must be exactly the same in step 4 as they were after step 2. If you disagree, reply and spell out exactly how the prize door could change from one set to the other in step 3.
So let's recap. In the official Monty Hall problem, the player divides the doors into two sets; the set with one door has a 1/3 chance of containing the prize door; the set with two doors has a 2/3 chance. The host removes a miss door from the two-door set, and with it he removes the chance that the player could pass the first challenge and fail the second. Even though both sets are now down to one door each, the "two-door set" still has a 2/3 chance of containing the prize door. To pick the correct set is to pick the correct door, so picking the door that was in the two-door set gives you a 2/3 chance of winning. Again, if you disagree, don't just assert that the situation does change; explain how it could have changed. -- Antaeus Feldspar 20:06, 7 November 2005 (UTC)
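The two-set argument above is easy to check empirically. Below is a minimal simulation sketch (the function and variable names are mine, not from the discussion); the host's tie-break when the player holds the car is deterministic here, which doesn't affect the stay/switch win rates:

```python
import random

def play(switch, trials=100_000, seed=42):
    """Simulate the standard game: random car, random pick, host opens a
    goat door that isn't the pick, player optionally switches."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # prize placed at random
        pick = rng.randrange(3)   # player's one-door set
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

With the same seed, the two runs see identical car/pick sequences, so the stay and switch rates sum to exactly 1.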

Easy peasy

Jesus Christ, why are people so thick? It's like this...

  1. Choose a door; you're lucky, it's a goat! But you had a 2/3 chance of choosing a goat, so the odds were on your side.
  2. Monty reveals the other goat.
  3. You switch; it has to be the car as the other goat is gone.
  4. You win!
  5. If you had chosen not to switch you would have lost.
  6. By not switching you're stuck with that 2/3 chance of getting a goat. Switching meant that you turned it into a 2/3 chance of winning the car.
  7. Easy peasy. End of story. Nighty Night Jooler 02:09, 8 November 2005 (UTC)
But if we let the proles know how simple this is, they might start thinking for themselves, and then where will we be? Bonalaw 14:31, 22 November 2005 (UTC)
OK, how's this for a paraphrase: we do away with the numbers and use the layman's term "chances are." You pick a door. Chances are, it's a goat. Then, Monty opens a door that he knows is a goat door. Now, assuming you picked a goat, and of course Monty showed you a goat, the only door left is the car. You'd be a fool not to switch. Of course, it's much less likely that you'd pick the car at first, in which case you'd lose by switching. Dyfsunctional 17:44, 14 December 2005 (UTC)

I think this reasoning is too simplistic, and would give the wrong answer in some cases. For instance, would this reasoning apply to #paragraph_about_Who_Wants_to_be_a_Millionaire? I think it would lead to the wrong answer. (The right answer is that your odds are 50-50 in that case.) --P3d0 21:31, 18 December 2005 (UTC)

Actual rules for the gameshow

I wonder if somewhere in the article it should be pointed out that under the actual rules for the “Let's Make a Deal” game show that this problem seems to be named after, switching doors didn't actually increase your chances of winning. On the game show, Monty would only offer the chance to switch ½ of the time if the player initially picked incorrectly, but would always offer the choice if the player initially picked correctly. This throws off the normal analysis, in which the choice is always offered, since simply being offered the choice to switch increases the chances that your initial door pick was correct. The preceding unsigned comment was added by 128.227.7.193 (talk • contribs) .
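Taking the poster's description of the show's rules at face value (offer always on a correct pick, half the time on a wrong one), a quick simulation sketch confirms the point: among games where a switch is offered, switching wins only about half the time. The helper names here are invented:

```python
import random

def offered_switch_win_rate(trials=200_000, seed=1):
    """Claimed show rules: a switch is always offered if the initial pick
    was right, and only half the time if it was wrong. Among games where
    a switch is offered, how often would switching win?"""
    rng = random.Random(seed)
    offered = switch_wins = 0
    for _ in range(trials):
        picked_car = rng.randrange(3) == rng.randrange(3)  # 1/3 chance
        if picked_car or rng.random() < 0.5:
            offered += 1
            switch_wins += (not picked_car)
    return switch_wins / offered

print(offered_switch_win_rate())  # close to 1/2, not 2/3
```

Analytically: P(offered) = 1/3 + (2/3)(1/2) = 2/3, and P(initial pick correct | offered) = (1/3)/(2/3) = 1/2, so switching gains nothing under these rules.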

  1. How do you know this to be true?
  2. New stuff goes at the bottom of this page.
  3. As this is your first edit from your IP I'll wait a day or so before moving it down so as to help you find this response. --hydnjo talk 19:46, 17 November 2005 (UTC)

Comment moved from article

  • Note- i'm not the original writer and i may be wrong, but it seems to me like the probability is actually 50/50. it's 2/3 for a person to lose. BUT say you're player 2. if player 1 is eliminated, he had the goat. which means that player 2 either got the car, or it's left. Which means there's an equal chance for him to win or lose by switching... right? -- 24.196.238.213
    • This question relates to the variant where there are two players, one is eliminated, and the question is "should you switch". Assuming you're not eliminated the answer is no. and the probability is 2/3 that you have the car. There are indeed two outcomes left, i.e. you have the car or switching gets it, but they have unequal probabilities in this variant - just like the two outcomes in the original problem have unequal probabilities. The key is the realization that N possible outcomes doesn't mean each one must have a 1/N chance. To make this one more obvious, consider a similar game with 10 doors, 1 car, and 9 contestants. If none of the 9 choose the car, 8 are eliminated randomly. If any of the 9 choose the car the other 8 are eliminated. This game ends up in the same situation, a player and a door to potentially switch to, with the same two possible outcomes with what I hope are clearly not even probabilities. In the 3-door, 2-person case, I assume you agree the unchosen door has a 1/3 chance of having the car at the beginning of the game. Eliminating one of the players doesn't change this, but since the car is either behind the unchosen door (still 1/3 chance) or one of the players has it, when there's only one player left the probability is (1 - 1/3) = 2/3. -- Rick Block (talk) 14:47, 23 November 2005 (UTC)
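Rick Block's 10-door, 9-contestant example can be simulated directly; the function name and structure below are my own sketch of his description:

```python
import random

def survivor_stay_win_rate(trials=100_000, seed=7):
    """10 doors, 1 car, 9 contestants on 9 distinct doors. If one of them
    holds the car, the other 8 are eliminated; otherwise 8 of the 9 are
    eliminated at random. How often does the survivor's door hold the car?"""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(10)
        picks = rng.sample(range(10), 9)   # 9 contestants, distinct doors
        if car in picks:
            survivor = car                  # the contestant holding the car survives
        else:
            survivor = rng.choice(picks)    # random survivor, all hold goats
        wins += (survivor == car)
    return wins / trials

print(survivor_stay_win_rate())  # close to 9/10: here switching would be a mistake
```

The two final outcomes (survivor has the car, or the unchosen door does) carry probabilities 9/10 and 1/10, which is exactly the "N outcomes need not mean 1/N each" point.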

paragraph about Who Wants to be a Millionaire

I deleted the following paragraph about the Who Wants to be a Millionaire show:

The game show "Who Wants to Be a Millionaire" has the same problem. The player is given 4 possible answers to a question. You can ask the host to remove 2 wrong answers, leaving you with 2 answers, one right and one wrong. Assuming you have no idea which of the 4 is right, you can guess one (let's say A), remove two, and be left with two. If A is not removed then there is a 1/4 chance that A is right and a 3/4 chance that the other one is right.

On the millionaire show, the contestant does not get to pick an answer and then have two wrong answers removed. Without picking, two are removed and if you then pick randomly there's a 50/50 chance. Even if you mentally (randomly) "pick" and your pick is one of the two remaining answers, the result is a 50/50 chance because your pick is not related to the process by which the other answers are removed. -- Rick Block (talk) 16:28, 16 December 2005 (UTC)

Agreed. No amount of meditation before the removal of two choices will affect the probability of the final outcomes. You need to tell Monty your pick and have that affect his actions. --P3d0 21:29, 18 December 2005 (UTC)

Note that (in the English version at least) the player can tell the presenter which of the 4 answers they think it is before the 50:50 removes two of them, but it is still possible that the one they picked could be taken away. That leaves two, and a true 50:50 chance of picking the correct answer at random. --JP Godfrey 21:10, 23 January 2006 (UTC)
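The point made in this section, that a "mental pick" unconnected to the removal process gains nothing, can be checked with a small simulation sketch (helper names are mine; the 50:50 here removes two of the three wrong answers uniformly at random, independently of the pick):

```python
import random

def mental_pick_accuracy(trials=200_000, seed=3):
    """Player 'mentally' picks one of 4 answers; two of the three wrong
    answers are then removed at random, independently of the pick. Among
    games where the pick survives, how often is it right?"""
    rng = random.Random(seed)
    survived = correct = 0
    for _ in range(trials):
        right = rng.randrange(4)
        pick = rng.randrange(4)
        wrong = [a for a in range(4) if a != right]
        kept_wrong = rng.choice(wrong)   # the one wrong answer that stays
        if pick == right or pick == kept_wrong:
            survived += 1
            correct += (pick == right)
    return correct / survived

print(mental_pick_accuracy())  # close to 1/2: the mental pick gains nothing
```

Analytically: P(pick survives) = 1/4 + (3/4)(1/3) = 1/2, and P(right | survives) = (1/4)/(1/2) = 1/2, unlike Monty Hall where the host's action depends on the pick.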

this problem is easier than stated

Maybe I missed it, but it seems no one has brought up the claim that this is a pseudo anti-intuitive paradox. The weirdness of the probability is ONLY a result when convergence (to that specified probability) is reached over an infinite number of cases. In this particular single case, when you are in a REAL game, switching the door doesn't change AT ALL YOUR specific-case chance of winning. "There are 3 kinds of lies in this world: lies, damn lies and statistics." The Procrastinator 14:14, 30 December 2005 (UTC)

In a real game that followed the rules set down in the problem, yes, switching the door does change your chance of winning. Picture the following situation: instead of doors, you and Monty have cards, and instead of three cards, you have ten cards, one of which is the Ace. Monty shuffles the cards, lets you pick one, and keeps the other nine in his hand. What are the chances that the card is in Monty's hand? Obviously, nine to one. Now, Monty discards eight non-Ace cards from his hand. Can the Ace possibly change from one hand to the other during this step? Clearly not, so the chance that the Ace is in Monty's hand is still nine to one, even though the actual size of Monty's hand is now one. -- Antaeus Feldspar 15:34, 30 December 2005 (UTC)
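Feldspar's ten-card version can be simulated to show the point that discarding non-Aces cannot move the Ace between hands; the names below are a sketch of mine, not from the discussion:

```python
import random

def ace_in_montys_hand(trials=100_000, seed=5):
    """Ten cards, one Ace: the player draws one, Monty keeps nine and then
    discards eight non-Ace cards. How often is Monty's remaining card
    the Ace?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        deck = list(range(10))   # card 0 is the Ace
        rng.shuffle(deck)
        hand = deck[1:]          # Monty's nine cards; deck[0] is the player's
        # Discarding non-Aces cannot move the Ace between hands, so Monty's
        # last card is the Ace exactly when the Ace started in his hand.
        hits += (0 in hand)
    return hits / trials

print(ace_in_montys_hand())  # close to 9/10
```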

Let's revert to the fundamentals of probability

To determine the probability of an outcome you list all the possible outcomes and then count the number of times the outcome you are interested in turns up. Look at the decision tree under the Venn diagrams in the main entry. How many possible outcomes are there under each of the possible contestant choices? How many of these outcomes result in the contestant winning the car?

50% in all cases!

Why?

Because the problem is mis-stated. When the contestant has chosen Goat 1 the quizmaster reveals Goat 2 - he doesn't have a choice. When the contestant has chosen Goat 2 the quizmaster reveals Goat 1 - he doesn't have a choice. When the contestant has chosen the car the quizmaster has to choose whether to reveal Goat 1 or Goat 2. These are independent possibilities and should not be selectively aggregated for the purposes of determining probabilities.

If you dispute this and believe that revealing Goat 1 or Goat 2 are aspects of the same event because the quizmaster only reveals one of them then you don't understand how probability works. The quizmaster also has to make a decision if the contestant picks the car and this decision must be included in the calculation of probabilities, as the decision tree correctly shows. The numbers allocated to the various decisions are meaningless, it is the counts that count.

Wherever you find a paradox, there's a fallacy lurking. (Hodgson's law - you read it here first) 80.47.80.51 01:45, 18 January 2006 (UTC) Graham Hodgson, 17 January 2006

Sorry, nope. They are not independent possibilities. There is a one-in-three chance that the contestant initially picks the car. The host can only choose between revealing Goat 1 or revealing Goat 2 when this one-in-three chance has already happened, and he must choose one of the two; therefore the total probability of these two chances must be exactly one-in-three. No more, no less. If we were to display the probabilities visually on a pie chart, we would see the total size of the user-picks-the-car slice stay exactly the same size as it was divided into the (not-significant) possibilities of "host reveals Goat 1" and "host reveals Goat 2". Are you seriously suggesting that that slice of the pie must actually get bigger because it's being divided into a larger number of slices? -- Antaeus Feldspar 02:18, 18 January 2006 (UTC)
The purpose of this page is to demonstrate that there's a sucker born every minute. The (number of ways to explain the problem logically and correctly) divided by the (number of dissenting opinions) will always be less than one. All you do by trying yet one more logically correct perspective in hopes of persuading only one more true believer is to perpetuate the ranks of non-believers by more than one. Thus the numerator will grow more slowly than the denominator and the ratio will continue to be less than one, (believers ÷ nonbelievers < 1), always. ;-) hydnjo talk 05:03, 18 January 2006 (UTC)
Of course - the chances of making a wrong initial choice are twice as great as the chances of making a correct initial choice, and therefore the chances of improving on the initial choice are also twice as great. I withdraw covered in confusion. 213.78.64.39 12:03, 18 January 2006 (UTC)Graham Hodgson
This has nothing to do with true-believers and non-believers, it's not a philosophical or moral dilemma, it's a simple mathematical problem whose solution is counterintuitive. Some understand it, some don't. Tailpig 19:37, 18 January 2006 (UTC)
I was using the terms believers and non-believers metaphorically for those that do understand and those that don't.  :-) hydnjo talk 19:48, 25 January 2006 (UTC)

Töff's analysis

This is my own plain-language explanation & debunk of "switching increases your odds." (It's essentially the Markov Chain). I hope Tailpig will read it and become a "believer." :) [töff's analysis]

Well, first off, let me say that I've never really liked that diagram; I know how the problem goes and what the 'trick' of it is and I find it incredibly difficult to see how that diagram relates to it.
With that said, however, your analysis is fundamentally flawed. Let me quote from your analysis: "Let's say you choose Goat1. The host shows Goat2. At that point, you have two paths: switch(Y) or not(N) ... and you have equal 50-50 chances to take either path." (emphasis in original) Well, that's the source of your confusion right there, because that is in no way the original problem. The essence of the problem is that the player chooses his strategy, whether to switch or stay -- he does not have it randomly selected for him with 50-50 probability! It's no wonder that your calculations show the player's chances as 50-50; if the player has one of two strategies randomly assigned to him, that will make his overall chances 50-50 no matter what the probability is for a given strategy.
If you doubt this point, let me illustrate. I will roll six ten-sided dice in a row. If and only if all six of them come up "10" will I put the prize in Box A; otherwise I'll put the prize in Box B. Elementary analysis should confirm that the strategy of picking Box B will pay off 999,999 times out of 1,000,000. If you are allowed to choose your strategy, you can win 999,999 times out of 1,000,000; if, however, I then randomly pick a 'strategy' and therefore a box for you, with equal probability of either, you now win the prize only 1 in 2 times.
Now that you know that the problem you've been dealing with isn't the actual Monty Hall problem, please let us know if you have any problem seeing why the answer of the actual Monty Hall problem is that switching gives you a 2/3 chance of winning. -- Antaeus Feldspar 00:08, 26 January 2006 (UTC)
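The dice-and-boxes illustration in the comment above can be checked exactly with rational arithmetic (variable names are mine):

```python
from fractions import Fraction

# Box A gets the prize only if six ten-sided dice all show "10".
p_box_a = Fraction(1, 10) ** 6   # 1/1000000
p_box_b = 1 - p_box_a            # 999999/1000000

# Choosing your own strategy: always pick Box B.
print(p_box_b)                   # 999999/1000000

# Having a strategy assigned by a fair coin flip: exactly 1/2,
# no matter how lopsided the boxes' probabilities are.
print(Fraction(1, 2) * (p_box_a + p_box_b))
```

This is the heart of the flaw in the 50-50 analysis: randomizing over strategies always yields 1/2, regardless of how good one strategy is.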

Monty Hall is a Markov Chain

Well, I may be thick, but I certainly don't get it. After the door has been opened, I am left with two doors. The story so far has been very entertaining, but in fact it has given me no information which will indicate if my first choice was right or not.

However, if you were wrong, you now know which other door would have been right! And that is new information. -Scarblac 20060223

That means it is now 1:2. How we got to this situation is irrelevant, unless the process of getting there gives me information which is relevant, but I don't see how it does.
The article correctly says that for some statistical calculations the past can be ignored, while for others it cannot. The major fault of the article is that it does not go on to say why in this case the past is relevant. The article cites card counting as an example where the past cannot be ignored - if I know that some cards have already gone (and I know WHICH) then I have information which affects the probability of the next card being an ace, so the past is relevant for future probability. But if we have a sequence of events which are separate events, then we have a Markov chain, and previous events in the chain do not affect the next one: for example a series of coin tosses.
The point about Markov is that there is a difference in perspective before and after any event in the sequence. If the chances of tossing a coin and getting heads is 1:2, obviously the chances of doing it twice is 1:4, but if I toss a coin and get heads, the chances of now doing it a second time are 1:2, because the perspective after the first toss no longer takes the past probability into account. This applies to all sequences of probability, unless there is a CAUSAL link between the past event and the current probability (e.g. the last toss dented the coin so that it now falls differently). The Monty Hall problem looks to me like a Markov chain. If I am wrong, then the article has to show that. It does not address this problem, and I seriously doubt any of you can.
Think about this: I have three cards, and I pick two of them and put them on the table in front of you. I tell you to pick one, and if you pick the higher of the two cards, you win. We all agree that your chances are 1:2. The fact that I have three cards doesn't affect the choice I gave you, and your chances would be the same if I had four cards or only the two. The fact that there WAS another door has no bearing on the chances of getting the car NOW.
(BTW, the article says that "hundreds of maths professors" have attested that the probability is 1:2. Is that not something you 1:3 proponents should be worried about - this smugness is incredibly arrogant! If the article is right, it needs to give serious maths authorities as sources, not internet sites.) --Doric Loon 11:24, 24 January 2006 (UTC)

You say the problem looks to you like a Markov chain. I assure you it is not, for reasons already explained in the article. The CAUSAL link is the constraint that the host MUST open a door, CANNOT open the door you've picked, and the opened door MUST NOT show the car (i.e. the host is NOT opening a random door). Your initial pick is a random event, but the host's action is not. The Bayes' theorem section is effectively a proof of the result explained in numerious other ways in the article. The references section already cites "serious maths authorities, not internet sites" (as you request). I assume you've read the article and the previous discussions on this talk page, and you still think the probability is 50/50. If you seriously want to understand I suggest you either print this article and take it to your maths teacher to discuss, or I can try to help here. -- Rick Block (talk) 15:09, 24 January 2006 (UTC)

Oh sure, I stand by the "assume good faith" principle and wouldn't be here if I didn't really want to understand it. The point is, though, that what Markov proved is that the probability of a future action depends entirely on the present situation and not on how we got here. How we got here is only relevant if it alters the present situation; the probabilities involved in getting here are not in themselves relevant for the probability of the next event. Now, I understand that the show host has no choice. What you haven't explained to me is what I learn from that which makes my next choice (switch or no switch) into an informed choice rather than an arbitrary (i.e. 50:50) one.
I'm not a mathematician, so of course I know I could easily be stumbling in the dark. But I do wonder if you (and the article) are not confusing two different things. Remember Markov and the coin: The chances of tossing a coin heads up twice are 1:4, but after I have tossed it heads up once, the chances of doing it a second time are 1:2. Now is it not possible that here too there are two different phases with different probabilities:

  1. the game is about to begin, I have to choose between the three doors, and know that I will later get the chance to switch. Are my chances better if I plan to switch? Yes.
  2. we are in the middle of the game, the host has opened a goat-door, and I now have the chance to switch. Are my chances better if I switch? No.

In other words, in terms of game theory, if we do it many times, I can optimise my chances by having a switch policy, but in the particular case, when I stand before two doors, it is 50:50. As with Markov's coin tossing, this seems intuitively wrong, but makes sense mathematically. I think. If that is true, then it explains why there are two strongly held views. They are answering different questions. In that case, though, the top of the article needs to rephrase the problem. --Doric Loon 16:11, 24 January 2006 (UTC)

Doric, the Monty Hall problem is the probability equivalent of an optical illusion: the situation is carefully chosen to make the mind jump to false conclusions based on the interpretive shortcuts that speed up everyday processing.
In this case, the situation has tricked you into thinking that there's more than one random event. There isn't; the only random event is the player choosing one door out of the three. (Technically, you can argue that when the player picks the car, there's another random event because Monty has to choose which one of two doors both containing goats should be opened. However, since there is no distinction between the goats, this is not a significant random event; either way, the result is exactly the same, that Monty winds up with one remaining door which has a goat behind it.)
Now, let's look at that random event, of the player choosing one door out of the three. The effect of this choice is to divide the doors into two sets, the player's set of one door and Monty's set of two doors. The car is either in the player's set, or it's in Monty's set; it should be fairly easy to see that the chances are only one in three that the car is in the player's set.
Next comes the other part that most often tricks people: Monty opens a door from his set which he knows to contain a goat. This reduces the size of his set from two down to one; this often fools people into thinking that because the two sets are now the same size, they must have the same probability of containing the car. However, this is not the case: the car cannot move from one set to the other during this step, so obviously the chance of the car being in Monty's set must still be two in three, as it was when the only random event of the problem happened.
(Note: some people get fooled for a different reason when contemplating this step, especially if the problem is phrased incorrectly or ambiguously. Some people think that there is a chance under the rules of the game for Monty to open a door and reveal the car, and that when the problem says "Monty opens the door to reveal a goat", it means that we are to eliminate those cases as not having happened. However, the correctly stated problem makes it clear that Monty knows which doors contain goats, and will always choose a door which contains a goat.)
So, in summary: the player makes a guess, which he has a one-in-three chance of getting right, of where the car is. Monty then reduces the size of his set to one, so that choosing the right set is equivalent to choosing the right door. If the player guessed right the first time, staying wins; if the player guessed wrongly the first time, switching wins. The probabilities are still determined by that first and only random event, the one-in-three chance of the player getting the car in his set on the first try. -- Antaeus Feldspar 17:18, 24 January 2006 (UTC)

First of all, congratulations, that's the clearest presentation of your argument I have heard, and better than what is in the article. In particular, what you say about viewing both parts as one event is helpful. I am now comfortable with the 1:3 solution, provided we are talking about the probability of the whole. In my last comment I accepted that for the sake of argument, but now I accept it without difficulty. Standing at the beginning of the game I am cool about saying, my chances are improved by switching when the time comes.
But can you see my problem about coming into the thing half way through? The article begins by asking about the probability of picking the right door out of two AFTER a random choice has been made. Taking all three doors into account means we are including past events in the calculation (or past phases of the event, if you prefer). But that is selective. Perhaps, unbeknown to me, there were five doors, with three cars and two goats, and two car-doors were eliminated which I never heard about. That would reverse the probability. Taking the past into account is therefore dangerous. This is just instinct, but I sense your idea of viewing both parts as one event is only legitimate when you stand back and look at the whole thing, not when you are standing in the middle with one part done and the next part to be thought about.
But I WILL take your advice about asking a maths prof. --Doric Loon 18:48, 24 January 2006 (UTC)

If you come in half way through (two closed doors, one open, player having originally picked one), unless you know what has happened you would likely think the probability is 50/50. It's not. In the Markov case the next event (the next coin toss) is an independent random event, unrelated to previous coin tosses. In this case, the probability of the player's chosen door having the car is related to the conditions in effect at the time this choice was made. This is the only Markov event. By varying the initial conditions, we could make the probability with two doors left anything we'd like. Start with 100,000 doors and one car. You choose one. The host opens 99,998. There are now two left. It doesn't matter whether you watched this happen from the beginning, came in with 50,000 closed doors, or arrived only at the very end. The initially chosen door has a 1:100,000 chance the whole time (just like when it was picked). At the end the other door has a 99,999:100,000 chance. When there are 101 doors left (the player's and 100 more), as a group the 100 that aren't the player's have a 99,999:100,000 chance, so each individually has a 99,999:10,000,000 chance (just under 1:100). Start with 100,000 doors and 99,999 cars. The host opens 99,998 doors with cars. Now the selected door has a 99,999:100,000 chance the whole time (just like when it was picked) and the other door has a 1:100,000 chance. The point is opening the doors has no effect on the probability when the player picks, and unless we reveal enough information to remove any uncertainty (making the "probability" 1 or 0) this probability doesn't change (if doors are not randomly opened). -- Rick Block (talk) 21:36, 24 January 2006 (UTC)
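The group-probability bookkeeping in the 100,000-door example works out exactly; here is a short check with rational arithmetic (variable names are mine):

```python
from fractions import Fraction

N = 100_000
p_pick = Fraction(1, N)   # the player's door, fixed when it was chosen
p_rest = 1 - p_pick       # the other 99,999 doors as a group

# Midway, with the player's door plus 100 others still closed, the group
# probability is shared equally among those 100 doors:
per_door_at_101 = p_rest / 100
print(per_door_at_101)    # 99999/10000000, just under 1/100 each

# At the end, the single remaining non-player door carries the whole group:
print(p_rest)             # 99999/100000
```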


OK, I've got it. In maths I'm a plodder, but even plodders get there. Thanks both. --Doric Loon 07:11, 25 January 2006 (UTC)

The fallacy of distinct goats

Just wanted to note something:

  • The player picks goat number 1. The game host picks the other goat. Switching will win the car.
  • The player picks goat number 2. The game host picks the other goat. Switching will win the car.

Umm.. let's say:

  • The player picks the car. The game host picks goat number 1! Switching will lose.
  • The player picks the car. The game host picks goat number 2! Switching will lose.

The chances are now 50-50, as they logically would be. The host's reveal has actually increased your chance of winning from 1/3 to 1/2. Now you have 2 choices and one of them contains a car.

I'm afraid not. You are confusing the fact that four possibilities can be enumerated separately with the idea that they must all be equally probable. That is not the case; since the last two possibilities you mention are both dependent upon the player initially picking the car, they can only divide between them the cases where that in fact occurred. Thus, the possibilities could be written like this:
  • The player picks goat number 1: 1/3
  • The player picks goat number 2: 1/3
  • The player picks the car (1/3) AND the host picks goat 1 (1/2): 1/6
  • The player picks the car (1/3) AND the host picks goat 2 (1/2): 1/6
If you still don't see how absurd it is, then let me pose this: when the door is opened, a goat could be clean or it could be dirty. So that means there are four goat possibilities: the host could pick a clean goat 1, a dirty goat 1, a clean goat 2, or a dirty goat 2! Just by considering whether the goat is clean or dirty, we've elevated the chance that the player initially picks the car to four out of six! Now you're saying to yourself "that's ridiculous -- whether the goat is clean or dirty can't change the probabilities!" Yes, exactly right -- and neither can considering which of two identical goats Monty shows change the probabilities. -- Antaeus Feldspar 16:04, 25 January 2006 (UTC)
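The correctly weighted enumeration given above can be written out and summed with exact fractions (a sketch; the labels are mine):

```python
from fractions import Fraction

# The four outcomes with their correct weights: the host's pick between
# the two goats only subdivides the 1/3 case where the player holds the car.
outcomes = {
    "picks goat 1 (switch wins)": Fraction(1, 3),
    "picks goat 2 (switch wins)": Fraction(1, 3),
    "picks car, shown goat 1 (switch loses)": Fraction(1, 6),
    "picks car, shown goat 2 (switch loses)": Fraction(1, 6),
}

assert sum(outcomes.values()) == 1   # a valid probability distribution
switch_wins = sum(p for k, p in outcomes.items() if "switch wins" in k)
print(switch_wins)                   # 2/3
```

Counting the four lines as equally likely would instead give 2/4 = 1/2, which is exactly the fallacy being discussed.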
Since you brought it up in my Discussion Post, I just wanted to point out that the 2 goats actually are physically separate. Solving a problem which involves some abstraction where the 2 goats are just 1 "not car" with a 2/3 chance of being chosen is not solving the original problem. It is solving a parallel problem which only gives the same answer when specific conditions are met: namely, that only winning the car matters, that the goats are revealed with the same probability, and that both are identical in being not cars. All the wiki explanations depend on this abstraction and give no mention of the dependencies which need to be met in order for this approach to work. Considering the goats separate (as they actually are) leads you to the 2/3 probability to get the car by switching. The preceding unsigned comment was added by 69.180.7.137 (talk • contribs) .
No, I'm sorry. You're really just wasting our time, here, because you keep introducing factors which are not in the problem and claiming that the problem hasn't been fully discussed if we aren't addressing your introduced variations. When we talk about a probability puzzle and we say "X rolls a six-sided die" we do not need to explicitly spell out that the die is not loaded. If the puzzle states that the die is loaded, then we address that factor, but it is just useless complication to insist that in all cases it be explored what happens if the die is loaded or if Monty really loves one goat a lot more than the other or whatever else it is. -- Antaeus Feldspar 18:18, 21 February 2006 (UTC)
I'm not sure anymore whether the argument is that switching improves your odds, or whether the whole business is a scam. It reminds me of the card game scam that Wednesday uses to cheat the waitress in American Gods. One thing I didn't like is that the sums of the probabilities at the bottom of the probability diagram add up to 200%, when they are only allowed to go to 100%. I.e. the odds when the player picks goat x and does whatever should only be a portion of the original slice. By the time you get to the end, you are pretending that you have 2 pies.
Pick goat 1 and switch: 1/6
Pick goat 1 and stand pat: 1/6
Pick goat 2 and switch: 1/6
Pick goat 2 and stand pat: 1/6
Pick car, see goat 1 and switch: 1/12
Pick car, see goat 1 and stand pat: 1/12
Pick car, see goat 2 and switch: 1/12
Pick car, see goat 2 and stand pat: 1/12
If you choose blindly between the last two doors, your odds of winning are 50/50. This stays true regardless of how many doors you initially begin with. That must be why the show has the 'no switching once you've picked' rule, otherwise they'd give away a lot of cars. Of course, you don't have to choose blindly; in the example you are allowed to choose the door with better odds. -- JethroElfman 23:37, 1 February 2006 (UTC)
Frankly, I think the diagram is not too good, and should be replaced with two trees -- one showing what happens if you use a "switching" strategy each time, one showing what happens if you use a "staying" strategy each time. The current diagram seems to confuse a lot of people into thinking that staying or switching is going to be picked for them, which is of course not the point. -- Antaeus Feldspar 02:51, 2 February 2006 (UTC)
I've tried to draw a full scenario for this, and my finding is that both switching and not switching have the same odds of 50%. In total, I have 24 scenarios: half of them result in a loss, the other half are winning scenarios. Here it is:
Car is put at Door (D)1 - Player picks D1 - Host picks D2 - P is not switching - WIN
Car is put at Door (D)1 - Player picks D1 - Host picks D2 - P is switching - LOSE
Car is put at Door (D)1 - Player picks D1 - Host picks D3 - P is not switching - WIN
Car is put at Door (D)1 - Player picks D1 - Host picks D3 - P is switching - LOSE
Car is put at Door (D)1 - Player picks D2 - Host picks D3 - P is not switching - LOSE
Car is put at Door (D)1 - Player picks D2 - Host picks D3 - P is switching - WIN
Car is put at Door (D)1 - Player picks D3 - Host picks D2 - P is not switching - LOSE
Car is put at Door (D)1 - Player picks D3 - Host picks D2 - P is switching - WIN
(repeat the same scenario for Car is put at Door 2 and 3).
So, in total, we will have 24 scenarios with:
6 scenarios of player switching and win
6 scenarios of player not switching and win
6 scenarios of player switching and lose
6 scenarios of player not switching and lose
Therefore, the player's strategies of switching and of not switching stand an equal chance of winning (50%)

202.152.170.254 10:12, 13 February 2006 (UTC) Hartono Zhuang

The events are not equally probable. For example, according to the above, when the car is at D1, the player will pick D2 2/8 of the time (1/4), but D1 4/8 of the time (1/2). The player has no reason to pick D1 more often than D2. This is a common fallacy: distinct events feel like they should all have the same probability. Clearly, here, they don't. --Mike Van Emmerik 10:32, 13 February 2006 (UTC)
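Mike's point can be made exact by weighting each (car, pick, host) branch by its actual probability rather than counting scenarios as if they were equally likely. A sketch with exact fractions (Python; the variable names are mine, and the standard host rules are assumed):

```python
from fractions import Fraction

p_switch_win = p_stay_win = Fraction(0)
for car in range(3):
    for pick in range(3):
        p_branch = Fraction(1, 9)  # 1/3 for the car's position × 1/3 for the pick
        hosts = [d for d in range(3) if d != pick and d != car]
        for host in hosts:
            p = p_branch / len(hosts)  # the host's choice splits only when pick == car
            other = next(d for d in range(3) if d not in (pick, host))
            if other == car:
                p_switch_win += p
            if pick == car:
                p_stay_win += p
print(p_switch_win, p_stay_win)  # 2/3 1/3
```

The 24 equally weighted rows above double-count the branches where the player picked the car, which is exactly what drags the answer to a spurious 50/50.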

Markov again

Coming back to Markov. I suspect the reason most people have difficulty is because at school they were taught the Markov principle (usually not under that name) and this really does look like it at first sight. Fooled me for long enough. As Antaeus Feldspar says above, it is an optical illusion in this respect. The article doesn't really help here. At the top of the "Aids to understanding" section it points to this issue, saying that "The most common objection to the solution is the idea that, for various reasons, the past can be ignored when assessing the probability." But I am not sure that what follows really helps most people see why it cannot be ignored: I certainly read it the first time with a sense of frustration that the key point was evading me. For me the eureka effect came with Antaeus' pointing to the (in retrospect obvious) fact that the car can't move. I've been mulling over how to explain the difference between Markov and Monty Hall. I think the difference is that after I toss a coin, I don't toss it a second time from where it landed, but rather I pick it up first: the way it landed last time doesn't affect the next toss because I return it to a neutral position before the next event in the sequence. The equivalent of returning to a neutral position would be if, after the first door has been opened, the game organisers were to remove the car and remaining goat and reallocate them by a random principle. THEN the second phase would be unaffected by the first, and that would be Markov. But they don't. What everyone understands is that if we play the game three times, the first guess will on average be right once and wrong twice; what is so easy to miss is that that cannot change unless the car moves. Now this all seems so obvious, but it is in fact the massive blind spot which makes the optical illusion trick people.
I wonder if it would be worth having a short paragraph on Markov in this article, and discuss the difference properly - and perhaps less chaotically than I can do it. --Doric Loon 15:24, 31 January 2006 (UTC)

I think you've got a good point, Doric. I'm not even sure that really is the most common objection, either: my experience with explaining it to people tends to be evenly split between those who think that probability automatically corresponds to the size of the sets, and those who think that Monty's removing a door has "reset" the probabilities and has generated a new random event, with probabilities independent from previous random events. Though... now that I typed that out, I'm wondering if they're really the same thing, after all. -- Antaeus Feldspar 15:37, 2 February 2006 (UTC)

Staying or switching at random

"Note that switching at random is quite distinct from just keeping the original choice. Having arrived at A, B, C, or D, if the contestant then blindly flips a coin to choose whether to switch or stand pat, then there is a 50% chance of ending up with the car. However, the choice doesn't have to be made at random. The coin flip gives a 50% chance of being the option with 1/3 likelihood, and 50% of being the option with 2/3 likelihood, so they balance."

This has nothing to do with the Monty Hall problem. The Monty Hall problem is about whether choosing a particular strategy can increase or lower your chances. Talking about what would happen if a coin flip decided your strategy for you only confuses the issue and makes it harder for people to understand the real problem. -- Antaeus Feldspar 16:02, 2 February 2006 (UTC)

Yes, perhaps the old diagram was just irritating me too much. It showed the odds at 50/50 and was confusing. Still, I think people consider choosing at random to be a strategy in itself and extrapolate from that to the mistaken notion that both doors are equal. The number of cries for help here on this talk page indicates to me that there are still edits to be done so the article makes its point better. That's my idea with the random thing: to tell people that their gut instinct of it being 50/50 is correct, but only if they were choosing blindly. If the page keeps any of the new diagrams I'll make better images to replace these quickies. -- JethroElfman 02:47, 3 February 2006 (UTC)
I agree there's still room for improvement. I think the card game experiment is maybe the best aid to understanding, since people can actually carry it out themselves and see why Monty opening a goat-door doesn't change any probabilities. Would anyone object if I moved that to the top of the "Aids to understanding" section? -- Antaeus Feldspar 16:00, 3 February 2006 (UTC)
Hey, I think that would be a good move. In fact, I would eliminate the 3-card version (or make it the extrapolation) and begin straight away with a 52-card pack where the objective is to find the ace of spades. It is important to get this done early because the length of the article means people may not make it to the end. JethroElfman 16:58, 3 February 2006 (UTC)
I would have to disagree about making the 52-card rather than the 3-card version the primary version; I've seen a lot of people respond to attempts to prove the 2/3 result through simulation with "well, if your simulation comes out with what's obviously the wrong result, it proves that you mis-programmed your simulation!" I'd rather make the simulation correspond as precisely as possible to the actual problem so that there's less room for people to think that some significant factor differs between the door version and the card version. -- Antaeus Feldspar 18:00, 3 February 2006 (UTC)
Additionally, what about a "Common misperceptions" section, explaining common ways in which people either misunderstand the ground rules of the problem or misunderstand the effects of those ground rules on the probabilities? -- Antaeus Feldspar 16:02, 3 February 2006 (UTC)

Simply.

Looking only at winning possibilities.

If you are going to swap, you must pick a goat in the first place, and your odds of doing that are 2/3. If you are not going to swap, you must pick the car in the first place, and your odds of doing that are 1/3.

--81.79.90.68 14:55, 4 February 2006 (UTC)Tim Robinson.

Tricky Host Scenario

I'm not sure I would switch. The Monty Hall problem states that the host allows you to switch only AFTER you have picked a door already (It was not part of the initial rules). So if the host knows that you picked a door with a goat, he COULD directly open that one. This would give you a much reduced probability of getting the car, and you could only get the car if you didn't switch.

No, sorry. That's not possible. Once you pick a door, the host must pick another door to open. He can't choose to open the door that you picked. -- Antaeus Feldspar 02:45, 4 February 2006 (UTC)
But that raises the question: why didn't he tell you about the switch in the first place?
How is that relevant? -- Antaeus Feldspar 03:13, 4 February 2006 (UTC)

Somebody please help - I think I understand the correct answer, but my mind is still struggling to get around the following: Imagine contestant A chooses door 1. Monty hall then opens door 3 to reveal a goat. At this point you introduce contestant B. Contestant B has no prior knowledge of the game. He is told he has been "allocated" door 1, he does not know why door 3 is open. He is effectively in the same position as contestant A, but he does not know that the game is fixed. This time both contestant A and B are offered the choice to switch or stick. Surely the percentage chance for contestant B is 50/50. If so how can the same two doors have different probabilities of a prize at the same time for two people standing in front of them? If contestant B does not have 50/50 why not?

Yes, the chances for B are 50/50; B is playing a different game. For A, there were initially three choices, all equally likely. For B, he sees one open door, with a goat in it. Note also how one of the rules of the game comes into play here: there is always ONE car and TWO goats. But for B, since door 3 is not available as a choice, there are only two choices, one of which has a car, the other a goat. The fact that B has been "allocated" the first door doesn't matter. Summary: A has three choices initially, all equally likely to have the car. B has two choices, both equally likely to have the car. After the host action, A has two choices, NOT equally likely (stay with door 1, or swap to door 2). B arrives after the host action, so there is no before and after for him. Look at it another way: the game is vastly different for B because the whole 1/3 chance that door three might have the car has been taken away, and distributed randomly into the two remaining doors. Lucky B! --Mike Van Emmerik 21:52, 6 February 2006 (UTC)
Oops! I agree I was wrong. Sorry for the misinformation. It's so easy to let your intuition lead you astray with this one. B's chances are the same as A's, because the allocated door is not chosen randomly, nor is the door which is shown to be open. --Mike Van Emmerik 22:18, 7 February 2006 (UTC)
If I may interject -- there is a very big difference, which I feel we're muddying here, between what one's chances actually are and what one perceives one's chances to be. B's chances actually are 2/3, the same as A. B may perceive his chances as 50/50, based on the information available to him, but we know in this case that there is important information that B does not have which changes the odds. If B, on some random whim, decided to adopt a strategy of "I'll always pick a door other than the one allocated to me", he would win 2/3rds of the time, even though his incomplete knowledge suggests incorrectly that he should only be winning 1/2 of the time. -- Antaeus Feldspar 23:53, 6 February 2006 (UTC)

No, B's chances of correctly choosing the winning door, when fully utilizing the information available to him, are 50/50, because he cannot differentiate one door from the other—that is, he doesn't know which of the two doors you picked. What is Monty's chance of picking the correct door, when fully utilizing the information available to him? It's certain, of course. So you cannot make the case that the odds can't change, because they are distinct between differing amounts of knowledge. Of course this is the case. A's chances are 2/3 because he is fully utilizing the information available to him. In his case, the extra bit of information is that he knows that, of the two doors you didn't pick, Monty will never open the winning door for you; thus Monty's actions are constrained by the fact that he knows something, and his actions tell A partly what that information is. The one thing that A doesn't know that Monty knows is anything more about the door he initially chose than he ever did. The odds of that particular door being correct are 1/3, which they have always been, and will continue to be.

I wrote the first specific web page on the MHP ten years ago, it has been referenced widely, and I have corresponded with countless people about this. No one way of presenting this problem is always effective at producing comprehension. But one approach has had good success, and that's the million-door version of the problem. If you pick a door from among a million, obviously your odds of it being the winning door are a million to one. However, if Monty opens all remaining 999,999 doors excepting the winning door, and presents you with two doors, your choice and the remaining door, is it better to switch or to stay? And if someone else walks along at that moment, not knowing anything about which door you picked and which doors Monty opened, their best odds for correctly picking the right door are 50-50. But the other guy is nearly assured of choosing the correct door—it's the one in almost a million doors that Monty very specifically, knowingly, did not open. There's a very small possibility that he didn't avoid any particular door, because your door was the correct door. But the chances of your having correctly chosen the winning door from a million are 999,999 to one against. It's extremely unlikely. (kmellis@kmellis.com) 69.254.138.180 06:27, 7 February 2006 (UTC)
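The million-door version is easy to try for any number of doors: switching wins whenever the first pick was wrong, i.e. (n-1)/n of the time. A rough simulation sketch (Python; the helper name is mine, and the standard host behaviour is assumed):

```python
import random

def n_door_trial(n, switch):
    """Monty opens every door except the player's pick and one other, never the car."""
    car = random.randrange(n)
    pick = random.randrange(n)
    # The single unopened door besides the pick: the car if the pick was wrong,
    # otherwise some arbitrary goat door.
    if pick == car:
        other = random.choice([d for d in range(n) if d != pick])
    else:
        other = car
    return (other if switch else pick) == car

trials = 10_000
wins = sum(n_door_trial(1_000_000, switch=True) for _ in range(trials))
print(wins / trials)  # very close to 1; the exact odds of winning by switching are 999999/1000000
```

With n = 3 the same function reproduces the familiar 2/3 for switching.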

We need to remain clear about this. B's chances of winning if he follows the same strategy as A are 2/3, just as A's chances are. The fact that he has no way of knowing that there is an optimal strategy or what it is doesn't alter the chances he would have if he followed that strategy. We might as well flip a double-headed coin and say that B has a 50/50 chance of being right if he guesses it'll come up "tails".
Now if you ask a different question, which is "can an optimal strategy be deduced from the information which B has?" then the answer is "no". But two people in a row stated that B's chances are actually 50/50; they are not. The question is "is there an optimal strategy?" not "would someone deprived of certain vital pieces of information about the game know that there is an optimal strategy?" -- Antaeus Feldspar 15:41, 7 February 2006 (UTC)
Very true. In this scenario, nothing has changed; B is still better off switching. But for all B knows, the game is designed such that he's always allocated the winning door, another door is opened, and the host tries to trick him into switching. Or perhaps (in B's mind, anyway) the game is that the host always picks the correct door, then he's "allocated" one of two doors, neither of which will ever win. He simply doesn't know.
Still, the laws of probability don't change simply based on what you know. But as far as B can tell from the available information, switching or staying might be a better option, or it might not make a difference. This means his chances balance out to 50/50 (I think), but only if he has a 50/50 chance of switching or staying (1/2*1/3 + 1/2*2/3 = 1/2). And all it takes is telling B "you should always switch doors" to bump his odds back up to 2/3. – Wisq 16:07, 7 February 2006 (UTC)
"...laws of probability don't change simply based on what you know". In this sense, they certainly do. By the statement of the problem with regard to A and B, B enters the room after both A and Monty have made decisions. B has no information with which to differentiate the two doors and has exactly a 50-50 chance of picking the winning door. I don't understand why people would try to determine some Platonic ideal probability value for the door and the prize independent of someone making a choice. It makes no sense. Or, if it does, the probability is null. If I flip a coin that is perfectly constructed, catch it in my hand and look at it, and you try to guess which side it is, then based upon what you know, you have exactly a 50-50 chance of being correct. If I "guess", based upon what I know, I have a 100% chance of getting it right. There is no independent value for the probability of guessing outside the context of someone guessing.
In the Monty Hall Problem, you can understand the varying probabilities by evaluating what each person knows: Monty, A, and B. Monty knows everything there is to know, and B knows nothing (other than that there actually is a prize behind only one door). A, however, has a 66% probability of having been told by Monty's actions everything that Monty knows, specifically where the prize is. A third of the time Monty tells A nothing. 2/3 of the time, he tells A everything. If A bases his decision on the assumption that Monty has told him everything, he will win 2/3 of the time. He will only lose when his first choice of door was correct.
There are a couple of ambiguities in how you guys are talking about this. The first is the implicit assumption that we're talking about whether or not A or B knows there is a winning strategy. That's a very confusing diversion. It's sort of a meta-MHP. In discussing the problem, we're taking a God's-eye view of the matter and are evaluating the odds of picking the winning door if using various strategies. In that context, all evaluation assumes that A assumes that a particular strategy is the winning strategy and acts upon it in each respective tree. That you want to talk about where the winning door is likely to be depending upon whether or not A knows the optimal strategy is a mixing of levels of analysis. And even though you know and understand, at least to some degree, the correctness of this answer to the MHP, your desire to mix levels is indicative of exactly why people's intuition about the problem is misleading. They don't know what perspective to take. Or, alternatively, they implicitly take B's perspective even though the problem statement allows them A's perspective. Or, alternatively, they attempt some sort of ideal perspective, which they assume is essentially B's. Here's why that's misleading: they could also incorrectly choose to take Monty's perspective (that is, they know where the prize is). What do the concepts of "staying" and "switching" and "winning" mean in that context? There's still "winning", but the concept of staying/switching seems absurd because it's deliberately ignoring the fact that from this perspective you already know where the prize is. Similarly, assuming B's perspective also renders the concept of staying/switching meaningless.
The reason we don't say that probabilities vary by "how much someone knows" is because when we evaluate probabilities we're assuming that everything that is possible to be known in a given problem statement is known, excepting the outcome. (We even do this with regard to past events, which is dubious, but that's a different discussion.) The MHP quite clearly states the problem as an evaluation of probabilities from A's perspective. Thus we assume that A knows everything that A can know and we proxy for A in evaluating various strategies. In doing so, we learn that when we're (or A is) in that exact situation, switching will on average allow us to win 2/3 of the time. If you want to talk about something that is very like the MHP but takes a different perspective, then you must very deliberately state the problem from that perspective so that it's clear how one should evaluate it to discover the answer.
I don't understand why people would try to determine some Platonic ideal probability value for the door and the prize independent of someone making a choice. Because that is the only way to compare apples to apples, instead of apples to oranges. If you look at the suggestion by 193.129.187.183 here which started us down this whole road of discussing B, you'll see that he/she asked "how can the same two doors have different probabilities of a prize at the same time for two people standing in front of them?" It was therefore important to clarify that in terms of what actual probabilities the doors had, they did not have different probabilities. It's only when you change the question from "what is the actual probability of the two doors" to "what is the probability that this person will find the winning door" that you actually see the probabilities being different for different people. -- Antaeus Feldspar 01:42, 9 February 2006 (UTC)


I think you should carefully consider what, if anything, your statement about "the 'actual' probability of the two doors" could possibly mean. To short-circuit the Socratic method, I'll just claim that if it means anything, it means that there is a probability of 100% that the prize is behind the door that it is behind. This system taken in isolation isn't probabilistic; it's already determined. The only probabilistic perspectives are those where an observer has less than complete information. And each one of those perspectives constitutes an independent problem. A, having picked a door and then watched Monty open a door, and knowing what that implies, should switch, and he'll win 2/3 of the time. B, not having picked a door, nor knowing which door Monty opened, sees only two doors that he cannot differentiate from each other in any way. There can be no "switching" or "staying" in a problem statement for this B fellow; the best he—and we—can do is say that if he randomly chooses a door, he has a 50% chance of choosing the right door. Finally, Monty, who knows where the prize actually is, would pick the winning door because he knows which door the winning door is. A problem statement about Monty would be something like "should Monty pick the door he knows hides the prize, or the door he knows hides the goat?" And of course the answer to that is that he'll always win if he chooses the winning door and he'll always lose if he chooses the losing door. These are three different problems. Only Monty's perspective is arguably the supposedly privileged perspective; but I suspect the most rigorous analysis would show that the only thing you could possibly mean by thinking of a privileged view, an inherent probability between the two doors, is a determined system that is completely known. You always "win" (Keith M Ellis, kmellis@kmellis.com, www.montyhallproblem.com).
Your analysis is correct but is only apt to confuse people by leading them away from the actual crux of the problem. Yes, if we have only as much information as B does, we only have a 50/50 chance of picking the right door. Yes, if we have the same information that A does (and apply it optimally, of course) then we have a 2/3 chance of picking the right door. And yes, if we have the same information that Monty does, then we have a 100% chance of picking the right door. What those who are not grasping the problem yet struggle with, however, is not why B's chances are 50/50 but why A's aren't. There are two ways to clarify this: One is to say "Monty gave A information about which door doesn't have the car; B actually has this same information too, but he is missing information about what happened before the number of doors was reduced to two, so he doesn't know that there were three different ways that they could have arrived at two doors and two of those three ways result in the car being behind Monty's door." The other is to go straight to the heart of the matter and say "There were three different ways that they could have arrived at two doors and two of those three ways result in the car being behind Monty's door." Obviously I think the latter is preferable. -- Antaeus Feldspar 18:28, 9 February 2006 (UTC)

(unwrapping back to left:) You state the following:

B, not having picked a door, nor knowing which door Monty opened sees only two doors that he cannot differentiate from each other in any way. There can be no "switching" or "staying" in a problem statement for this B fellow, …

But the scenario we were discussing states the following:

Contestant B has no prior knowledge of the game. He is told he has been "allocated" door 1, he does not know why door 3 is open. He is effectively in the same position as contestant A, but he does not know that the game is fixed.

Hence, there is indeed the concept of switching, due to the "allocation" of a door. If B picks to switch or stay randomly, he has 50-50 chances. If you tell B to switch, he has a two thirds chance. This is why I say the probability (of winning via switching versus via staying) does not change based on what you know. If you want to say that B doesn't even know what door has been allocated, you may, but that's a whole different problem. – Wisq 04:26, 10 February 2006 (UTC)

This page is monstrous

I edited this page some time ago to clearly state the problem and the answer. Since then it has been edited back into a state. The page is a mess because there is no single, clear problem and answer statement. It is long and rambling. There are several statements of the gameshow, alternative versions, multiple versions of the answer, anecdotes, and all manner of irrelevant nonsense, and the important information is simply lost. This is an example of an article where the lack of an authority and a team of expert researchers, as is found in a "real" encyclopaedia, leads to a polarisation of opinion and hence more and more verbose explanation in order to convince those who fail to understand the right answer to prevent them from editing incorrectly.

By the way, the correct answer is yes, you should switch. If you don't believe it then set up the game yourself with an accomplice, try it 100 times and see what answer you get. Alternatively, use a computer simulation, or even do the probability theory from first principles (but do it properly). Any mathematical explanation that reaches a different answer has a flaw in it. Ignore your intuition. PK

I agree, but aside from deleting most of the page and then locking it, how can you solve this? Besides, although we know the correct answer, it can take a lot to convince someone else that their intuition is wrong — typically by explaining it to him or her in exactly the right terms. By debating it on the talk page, explaining it different ways until the opposing party "gets it", and then putting that method on the page, we have organically come up with a system that (judging from the reduced debate on this page) convinces most people of the correctness of the answer.
I would rather have a long and overly verbose page that expresses the truth in a way that almost anyone can understand, than a short and concise page that most people would look at and think "that's bogus" — and either edit it, or just walk away convinced that Wikipedia is full of lies. – Wisq 17:13, 10 February 2006 (UTC)

Dual data sets being ignored

On these types of problems, there are dual data sets at work. Set one is the information which allows us to determine the "odds" of finding the item being sought. Set two is the information which allows us to determine what the underlying statistical distribution of winning choices is. Originally, the "odds" of both data sets are the same, but they deviate when additional information is acquired. The additional information is provided by the certainty that the removed choice is a loser. Because of this, the choosing party is no longer making a guess, but instead is making an informed calculation. Please look at the definition for guess: "To predict (a result or an event) without sufficient information". Please take note that when one has sufficient information, one is no longer guessing. The removal of one choice provides us with more information, and we move from the position of a mere guess to that of an educated guess, which is not the same thing. Now as to the "statistical distribution" angle: If you have 10 shoe boxes on your desk and one of them contains an egg, the distribution percentage is 1/10 or 10%. Those numbers never change. However, when we gain more information about the contents - say by opening a few - our odds of finding the egg increase. People argue about these problems because of the tricky idea that there is true "guessing" involved when there is not. And also because they forget that the numbers regarding the original distribution of winning choices are fixed. Only the odds of finding the item improve, not the statistical likelihood that it actually existed. Merecat 05:48, 11 February 2006 (UTC)

It seems that no matter how this is explained (including your logical choice of words) the argument will not end. There are some who will follow the theatrics rather than the logic no matter what. Give it a go in the article if you think it will help. hydnjo talk 01:11, 12 February 2006 (UTC)

Another way to explain it

If you can accept this following fact, it could be easier to see that switching will make the probability for getting the car 2/3.

Fact: If you choose a door with a goat behind it, switching will get you the car; if you choose the door with the car behind it, switching doors will get you a goat.

Let's take it from the start. You choose a door, let's say door C. There is now a 1/3 chance that you chose the door with the car behind it. The host now reveals one of the doors with a goat behind it; let's say he reveals door A. There is now ONE goat behind a door which you can't see, and ONE car behind a door you can't see. You now have two doors whose contents you don't know; the only thing you know is that one car and one goat are left, hiding behind these two doors. (Just to make it perfectly clear: it COULD be that behind door C there is a goat, and therefore behind door B there is a car; it also COULD be that behind door C there is a car, and therefore there is a goat behind door B; it CANNOT be that there is one goat behind each of the two remaining doors, because the host has already revealed a door with a goat behind it.) Now comes the fact that will make it easier to understand: if you switch doors, it is GUARANTEED that IF there is a goat behind door C (the door you first chose) you are switching to the door with the car behind it (door B), and IF there is a car behind door C then it is GUARANTEED that you are switching to a goat (door B).

When you first chose door C, there was a 2/3 chance of selecting a door with a goat behind it. When you switch, it is GUARANTEED that you are switching to a door whose contents are not the same as those of the initial door you chose (read that sentence twice). Therefore, if you switch, there is a 2/3 chance of getting the car.

That seems like a pretty convoluted way of saying "picking a goat and switching will get you a car, and you have 2/3rds chance of picking a goat". :) Really, that's the absolute simplest explanation. – Wisq 00:10, 14 February 2006 (UTC)

OK, I agree; what I meant was that here is another way to explain it.

Disagreement with explanations

No, I don't agree that after seeing a goat you have a 1/3 chance of winning. I think after seeing a goat you have a 1/2 chance of winning (in a situation without the Monty-reaction type thing). If you had a 1/3 chance to get each and one is eliminated, then do you have a 1/3 chance to get the remaining one? No, it's 1/2 each. Bayes' rule: (1)*(1/3)/(2/3); trials: 33 1s, 33 2s, 33 3s; cross out all 3s, left with 33 of each, 66 historical outcomes, 33/66 = 1/2. Now I'm not saying it doesn't make sense that Monty is giving you information by reacting to your choice. But I want a precise explanation of this, and one that depends on a 1/3 chance to get the car at a point where you should be looking at it as a 1/2 chance doesn't cut it for me.

Someone show me a version or explanation of Bayes' formula where how you got the given information is significant. I can see how this idea might make sense, as if you take

(Prob (Outcome 1|Not outcome 3) = .5) * (50% chance you were given Not Outcome 3, given Outcome 1, given not outcome 3 (irrel. given outcome 1)) = .25

(Prob (Outcome 2|Not outcome 3) = .5) * (100% chance you were told Not Outcome 3, given outcome 2, given not outcome 3 (again irrel.)) = .5

This is a totally different explanation, because you are not using a 2/3 probability of having chosen a goat after the point where you should be using a 1/2 probability. The .75 total represents the probability of (Outcome 1 or Outcome 2) given (not outcome 3) given (not Outcome 3 was given with a 50% chance under Outcome 1, or a 100% chance under Outcome 2). The .25 complement represents the meaningless probability of (Outcome 1) given (not outcome 3) given (not outcome 2 was given with a .5 probability) -- meaningless in this case because "not outcome 2" was not what was given.

The more general form of Bayes' theorem I used here, for use when how often the information is given is correlated with the outcome:

Prob(A|B) = (Prob(B|A)*Prob(A)*Prob(B was given|A)) / ((Prob(B|A)*Prob(A)*Prob(B was given|A)) + (Prob(B|A complement)*Prob(A complement)*Prob(B was given|A complement)))
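To see how this sort of "weight by how often the information is given" correction plays out numerically, here is a minimal sketch (function and variable names are mine, purely illustrative): weight each hypothesis about the player's own door by its prior and by the probability that Monty would reveal this exact goat under that hypothesis, then normalize. The Prob(B) factors cancel in the ratio, so they are omitted.

```python
# Posterior over what's behind the player's door after Monty reveals Goat 1.
# Each hypothesis is weighted by its prior AND by the probability that this
# exact revelation would have been made under it.
def posterior(prior, p_info_given):
    weights = {h: prior[h] * p_info_given[h] for h in prior}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Standard Monty Hall, player shown Goat 1:
prior = {"car": 1/3, "goat1": 1/3, "goat2": 1/3}
p_shown_goat1 = {
    "car": 1/2,    # Monty picks freely between the two goats
    "goat1": 0,    # he never opens the player's own door
    "goat2": 1,    # he is forced to reveal Goat 1
}
post = posterior(prior, p_shown_goat1)
print(post["car"], post["goat2"])  # 1/3 and 2/3: switching wins 2/3 of the time
```

For the standard game this reproduces the accepted answer: the player's door holds the car with probability 1/3, so switching wins 2/3 of the time.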

Now I can figure out any similar game, with differing amounts of money behind the doors, or no matter what Monty does to decide which door to open. It is wrong to say "You had a 2/3 chance of choosing a goat, in which case Monty has to choose the other one" when the conditional probability of your having chosen a goat, given that you didn't choose the one he shows you, is 1/2.

What happened was: on the 1/3 of occasions when you chose goat 1 and the 1/3 when you chose goat 2, he shows you the other goat for sure, whereas on the 1/3 of occasions when you got the car, he shows you the goat you see with 50% chance. That may have happened because there were 2 goats and 1 car, and you had a 2/3 chance to choose a goat until you saw otherwise, but it did not happen because you had a 2/3 chance to choose a goat at the time of the final decision; you did not.

You're overcomplicating this. Just follow this way:
  • Your chance of winning by staying is 1/3, because it can only happen when you pick the car with your first try.
  • If Monty didn't open a goat door, your chance of winning by switching would also be 1/3 -- there would be a 2/3 chance that the car would be behind one of the other two doors, and a 1/2 chance of picking the right one of those doors. (2/3) * (1/2) = 1/3.
  • However, Monty does open a door that contains a goat; by doing that, he eliminates the 1/2 chance of guessing wrong which of the other two doors to switch to. (2/3) * (1/1) = 2/3.
It's really that simple. -- Antaeus Feldspar 01:41, 14 February 2006 (UTC)
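For anyone who prefers to see the 1/3-stay versus 2/3-switch split empirically, a quick Monte Carlo sketch (names are mine) does the job. With three doors, Monty's reveal leaves only one door to switch to, so switching wins exactly when the first pick was a goat:

```python
import random

def play(switch, n=100_000):
    """Simulate the standard three-door game n times; return the win fraction."""
    wins = 0
    for _ in range(n):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens the remaining goat door, so switching wins
        # exactly when the first pick was a goat.
        wins += (pick != car) if switch else (pick == car)
    return wins / n

print(round(play(switch=False), 2))  # ~0.33
print(round(play(switch=True), 2))   # ~0.67
```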

Well, it's not complicated to begin with (although I'm sure it looks so, since I didn't use the math markup here). It's just a factor multiplied into Bayes' theorem to adjust for correlation between the frequency with which information is given and the outcome.

Second, I think that you and the wiki explanations are oversimplifying things. I believe that it is only dumb luck that this method of approaching the problem gives the correct answer in this case. I believe this because A) there is information being given when the door is opened that is being completely ignored -- namely, that you did not choose the revealed goat -- and B) you are neglecting the fact that you are allowed to make a new decision after he opens the door. Here are a few examples in which the same way of approaching the problem would give you worse answers than you could and should get by re-evaluating probabilities after seeing the goat.

1) You are simply given that you did not choose goat 1 (or goat 2), with this information not being given to you in any special way. This would mean a 1/2 chance to get the car by switching.

2) Monty shows you goat 1 2/3 of the time, and goat 2 1/3 of the time, when you choose the car. This results in a 2/5 chance that you chose the car if you are shown goat 1, and a 1/4 chance that you chose the car if you are shown goat 2.
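These two conditional probabilities are easy to check numerically. A rough simulation sketch (function name mine; the 2/3 bias is the one just described):

```python
import random

def biased_trials(n=300_000):
    """Monty reveals Goat 1 with prob 2/3 (Goat 2 with prob 1/3) when the
    player holds the car; otherwise his reveal is forced. Returns, for each
    revealed goat, the observed chance that the player's door hides the car."""
    counts = {"g1": [0, 0], "g2": [0, 0]}  # [times shown, times player had car]
    for _ in range(n):
        pick = random.choice(["car", "g1", "g2"])
        if pick == "car":
            shown = "g1" if random.random() < 2/3 else "g2"
        else:
            shown = "g2" if pick == "g1" else "g1"
        counts[shown][0] += 1
        counts[shown][1] += (pick == "car")
    return {g: c[1] / c[0] for g, c in counts.items()}

est = biased_trials()
print(round(est["g1"], 2))  # ~0.40, i.e. 2/5, when Goat 1 is shown
print(round(est["g2"], 2))  # ~0.25, i.e. 1/4, when Goat 2 is shown
```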

In the particular case of the original problem, it just so happens that you get the same answer by working out the problem correctly as you do by ignoring that you are given information and are asked to make a new decision. But this depends not only on the facts that all you care about is getting the car and that the goats are alike, but also on there being an equal chance that he shows each goat if you choose the car. The wiki explanations do not cite any of these dependencies, and even if they did it would still be silly. That's like saying "Multiplication is just like addition, see: 2*2 = 4", with the additional note that this approach only works with 2s. - T. Z. K.

Your scenario 1, as you describe it, is simply not the Monty Hall problem. I assume that by "with this information not being given to you in any special way" you mean that Monty does not open a door and show you Goat 1 or Goat 2 as appropriate; he just tells you that the door you picked doesn't have Goat 1 behind it (or Goat 2, as appropriate). Well, the point of Monty opening a door is not to identify where a particular goat is (see "The fallacy of distinct goats" above), it's to identify one location where the car isn't.
Your scenario 2 is also not the Monty Hall problem, albeit in a subtler way. If we had the additional information that Monty picks one goat over the other to show 2/3rds of the time when he has a choice of two, then yes, we could get an even more precise estimate of what the chances are that we chose the car in the first place. However, we don't have that information.
In short, you introduce all sorts of factors which are not part of the original problem, and then say "See? Your so-called 'answer' only works out for the original problem -- not to all these different variations I can create by introducing new factors! Doesn't that prove that your answer only works out by "dumb luck"?" No, our answer works out because it is the answer to the original problem. -- Antaeus Feldspar 01:59, 18 February 2006 (UTC)
Actually, what I was claiming is that the explanations in the wiki depend on a whole lot of things in order to work, things they make absolutely no mention of and which are far from obvious -- in addition to the fact that it is pointless to use approaches which only work in very specific cases. What you are claiming is that you should teach children who ask what multiplication is, so they can answer 2*2 = 4, that multiplication is just adding the numbers, and then when they complain that they got 2*3 wrong, tell them "Oh, that wasn't the original problem." You never told them that approach only works for 2*2, did you? I already outlined myself that because the goats are identical and both not the car, AND because he reveals each goat with an equal chance (something most people don't realize), you just happen to get to the right answer by this ridiculous approach of ignoring all given information and the chance to make a new decision.
As for your straw-man "fallacy of separate goats" argument, it is important to realize that the argument being made here is not that this situation gives a different probability if you pretend the goats are not 2 physically separate entities and make some abstraction claiming them as one thing. The point is that, regardless of whether or not they are separate, you have to identify that they are in every way exactly the same, including how often they are revealed, before you can consider them not separate. And to do that, you have to solve the problem (or a similar one in the past) THE CORRECT WAY, to verify that it gives the same answer if they are not separate.
It seems like the biggest problem the people supporting this type of solution are having is the belief that they can somehow intuitively know that abstracting the goats as "not separate" gives the same answer. However, if you ask these people for answers to similar problems, it becomes obvious that this "intuition" is fallible (as they often are); specifically, they never realize that the goats have to be revealed with the same probability in order for abstracting them as one to give the correct answer. You should never just assume that other people know a priori why solving a parallel problem in place of a certain problem gives the correct answer. You should also realize that if your "intuition" is capable of discerning when it's OK to solve a parallel problem instead of the actual problem, it is partly because experience doing so, in cases which you believe are similar, has given the same answer.
In the end it's really this simple: you can have a solution that only works in this specific case, doesn't explain why it works in this specific case and not others, and is no simpler than the general explanation; or you can have a general solution that solves this game and any variation or similar situation. This is the sense in which the wiki explanations are "wrong". You are basically claiming that solving a parallel yet different problem gives the same result, and I am asking: why solve a different problem? -T.Z.K.
You already used the "telling them multiplication is the same as addition (in this one case)" analogy and that analogy was already rejected as being inapplicable. You've offered nothing that makes it any more applicable. You've simply repeated claims that the article's solution only works "for this specific case" and then proceeded to list various cases with completely different rules and therefore completely different answers. This rather misses the point of why the original problem is of interest: namely, that even without all the added (and unneeded) complications people's intuition causes them to assume one 'answer' when the correct answer is quite different.
Actually, the multiplication analogy was never addressed, nor can I see how you would separate it from the current situation.
By the way, your insistence that "the goats have to be revealed with the same probability in order for abstracting them as one to give the correct answer" is at best highly misleading, if not completely wrong. The closest it comes to being correct is: "If Monty had some system for picking one out of two goats in those situations where he can actually choose between two goats and the player is aware of the exact probability distribution in Monty's system and that probability distribution is uneven and if the player can distinguish between the goats then the player may devise a strategy on when to switch and when to stay that outperforms the best strategy available in the case where the goats appear identical to the player or are treated identically by Monty." If any one of those factors is missing -- for instance, if the player knows that Monty has a favorite goat that he will always pick when he has a choice between the two but the player doesn't know which goat is which -- then the result of the problem is exactly the same as if there was no difference between the goats. Just because when all these factors are present (which none of them are in the original problem) the answer is different does not by any reasonable standard mean that something is missing when this particular extrapolation from counter-factuals is not covered. -- Antaeus Feldspar 18:05, 21 February 2006 (UTC)
Actually, it is correct, and your belief that it isn't goes far to demonstrate why your explanations are incorrect and misleading. If Monty shows the goats with different probabilities when the car is chosen, but the player doesn't know which goat is which, it most certainly will not give the same result as if he were equally likely to choose each goat. What it would do is give an average of the probabilities of success for the case where goat 1 is revealed and the case where goat 2 is revealed. In the "Monty has a 2/3 chance to reveal Goat 1 if the car is chosen" example, there would be a (2/5 + 1/4)/2 = .65 chance to succeed by always switching. It doesn't matter whether or not you know what Monty's strategy is. Think about it: if you always switch and conduct an experiment, how could your knowledge of Monty's strategy, which physically changes nothing, have a bearing on the percentage of the time you get the car by switching? This was what I meant by the Bayes' theorem assumption idea. It doesn't matter whether or not you know that the frequency with which information was given is correlated with the outcome. Also, it is in fact a multitude of factors which must be the case for the wiki explanations to arrive at the correct answer.
Okay, let me walk you through this. Let's say that Monty rolls a six-sided die at the beginning of each round, before he offers the player his initial pick of doors. If the player initially picks the car, Monty uses the die roll to determine which goat to reveal: on a 1 or 2, he reveals Goat #1; on anything else, he reveals Goat #2. (For simplicity's sake, we'll group the possible results from here out as "1 or 2", "3 or 4", and "5 or 6", since these results are equiprobable and each item of each pair gives the same result as its partner.)
We can now draw a table of the results. Across the top is what Monty rolls on the die, down the side is what the player initially picks, and in the cell is what Monty shows and whether staying (ST) or switching (SW) is the winning move.
            1 or 2        3 or 4        5 or 6
Goat 1      Goat 2; SW    Goat 2; SW    Goat 2; SW
Car         Goat 1; ST    Goat 2; ST    Goat 2; ST
Goat 2      Goat 1; SW    Goat 1; SW    Goat 1; SW
We can count fairly easily and see that when Goat 1 is showing, switching is the correct move 3 times out of 4. We can also see that when Goat 2 is showing, switching is the correct move 3 times out of 5. Now, here's the tricky part: if we did math the way you did it above, we'd add (3/4 + 3/5) = 15/20 + 12/20 = ... 27/20. Yes, twenty-seven times out of twenty, we'd win by switching! =) I hope this stands out fairly clearly as an indicator that something is wrong.
Here's the correct math: If we're looking at Goat 1, our chances are 3/4. However, we're only going to be looking at Goat 1 4 times out of 9 -- so our chances of winning by looking at Goat 1 and then switching are 3/4 * 4/9, which works out to 3/9. If we're looking at Goat 2, our chances are 3/5, but only 5 times out of 9 will we be looking at Goat 2 -- so our chances of winning by looking at Goat 2 and then switching are 3/5 * 5/9, which also works out to 3/9. If we add those two together, we get the total probability of looking at any goat and then switching for the win: 3/9 + 3/9 = 6/9 = 2/3. i.e., exactly the same probability as if there were no difference between goats. -- Antaeus Feldspar 20:31, 21 February 2006 (UTC)
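The weighted-average arithmetic above can be verified exactly with fractions (a small sketch, names mine), including the nonsensical 27/20 that the unweighted sum produces:

```python
from fractions import Fraction as F

# How often each goat ends up showing, and the switch-win chance in each case.
p_show_g1, win_g1 = F(4, 9), F(3, 4)
p_show_g2, win_g2 = F(5, 9), F(3, 5)

total = p_show_g1 * win_g1 + p_show_g2 * win_g2
print(total)            # 2/3 -- same as with indistinguishable goats
print(win_g1 + win_g2)  # 27/20 -- the nonsensical unweighted sum
```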
To begin with, I believe you are referring to a typo as my math (which, strangely, was already fixed before your post...), and what was supposed to be there is a /2. In other words, if you can't tell which goat is which, then you average the success chances of switching from each goat. ...tbc
Actually, I was referring to what you posted here, where your "math" was only incorrect because you were choosing the completely wrong method for assessing the total chance of winning by switching. I did notice that later you "corrected" your "typo" to "(2/5 + 1/4)/2 = .65", which meant that you were getting not only the mathematics but even the basic arithmetic wrong -- or didn't you notice that dividing (2/5 + 1/4) by 2 made it .325 and not .65? But as previously explained, neither of these is correct. I think the fact that you are quibbling over us failing to follow your changing of your argument to make it more wrong is an indication of your emotional desire not to be wrong and not a competent judgement of the situation. -- Antaeus Feldspar 15:33, 22 February 2006 (UTC)
So what, you rooted through the page's edit history to find a typo that had already been fixed, in order to claim it was someone's argument? By the way, you might want to recheck your snotty little failed attempt at correcting my math... (2/5 + 1/4) = 13/20 = 6.5/10 = .65
Unfortunately, articles like this tend to attract mathematicians whose motivation seems to be to make any mathematical subject appear ludicrously complicated, presumably in an effort to boost their own egos. Have you seen the article for Birthday paradox? I mean, that's one that is pathetically easy to explain, but the amount of sheer mathwank in that article is downright obscene. --Bonalaw 11:02, 14 February 2006 (UTC)
... And yet all that "mathwank" (heh!) still can't convince some people who, honestly, seem to know what they're doing (but continually forget something in their own calculations). I suspect that even the simplest non-math explanation ("chance of goat = 2/3; goat + switch = car") would be enough to convince everyone, if only they would read the entire page and retain an open mind. And if that's not enough, run the simulations! They prove it statistically. It becomes much easier to prove it mathematically once you know the correct answer.
Unfortunately, for those certain they're correct (or those who don't read the page), I expect no amount of explanation on that page is going to convince them. Of course, the upside to all that info is that it discourages the "I'm right and you're wrong" editors because there's so much text and so many diagrams to edit; hence, most of the convincing happens on this page, rather than as edit wars on the article itself. – Wisq 17:23, 14 February 2006 (UTC)

You guys are complaining about other people not keeping open minds toward explanations you accept, yet bringing this up just because someone disagrees with the explanations means you are automatically assuming they are wrong. For a person can have an open mind and believe an explanation to be wrong *if it is in fact wrong*. Don't be a hypocrite. -T. Z. K.

If you actually read the article (past tense), you probably realize that a lot of people found the accepted solution hard to swallow at first glance (I'm hinting at the "anecdotes" section). Well, that "lot of people who found the solution hard to swallow at first glance" includes many of the people commenting here -- or at the very least, it certainly includes me. However, these people have read the material in the article and agreed with it in the end. In this context, don't you think it's a little unfair to call all of us hypocrites and close-minded? --Gutza T T+ 04:11, 18 February 2006 (UTC)

Actually it is completely irrelevant to it. What is your argument here? I was simply pointing out that it is absurd to call other people close minded for not accepting your views because that requires you to "close out" the possibility that they fully understand your views and still know that they are correct. - T.Z.K.

If you are disagreeing with the explanations, then yes, you are wrong. Sorry to be so blunt, but that's the truth. You can prove it by theory, by math, and by simulation. It is correct. After that, it's just a matter of finding the explanation that "clicks" for you. And that means reading them all, and actively following along rather than skimming or looking for faults. (FYI, I was in the "why is that right?" camp until I understood it, not the "that's wrong!" camp. But it still took some reading until I understood it fully. It's a very counter-intuitive problem, but it's also a very proven one.) – Wisq 07:09, 19 February 2006 (UTC)

To begin with it is clear you do not even know what you are responding to. I believe the explanations are incorrect, but not because they give the wrong answer. They are incorrect because the approaches used in the explanations only work in this specific situation and make no note of why they work in this situation and not others. Therefore it is irrelevant whether or not the outcome has been proven by experimentation. Furthermore there is no indication that claims like "You are wrong" are anything more than an indication of your emotional desire not to be wrong and not a competent judgement of the situation. -T.Z.K.

One more time

Start with the usual. You pick ONE door and the host gets the remaining TWO doors. Stop. Now would you think it a good idea to swap your ONE door for the host's TWO doors? If you say NO, then I have nothing more for you, go away.

But if you say YES, I want to swap my ONE door for the host's TWO doors, then you're almost there. That's it, except for the host's theatrics of opening a losing (and he knows it) door. What you're doing is swapping your ONE door for his TWO doors, even if he tries to mess with your head by opening one of them (oh, sure, like he's going to open the car door), with diversions like the audience screaming "swap" - "don't swap", the crew saying "cue the flashing lights", and the host's silly grin. It's all about messing with your head, remember: you're swapping your ONE door for his TWO doors.

To dramatize the situation, start with ten doors. You get ONE and the host gets NINE (they would never do this, as it would expose the whole thing). Now, the host (between commercials) opens EIGHT of his NINE doors (never ever opening the car door, and he knows which it is). Whatcha think now? hydnjo talk 02:25, 18 February 2006 (UTC)
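The ten-door version simulates just as easily (a sketch, names mine); as before, once the host has opened his eight goat doors, switching wins exactly when the original pick missed the car:

```python
import random

def ten_doors(switch, n=100_000):
    """Ten-door version: host opens eight of his nine doors (never the car),
    so switching wins exactly when the original pick missed the car."""
    wins = 0
    for _ in range(n):
        car = random.randrange(10)
        pick = random.randrange(10)
        wins += (pick != car) if switch else (pick == car)
    return wins / n

print(round(ten_doors(switch=True), 2))   # ~0.90
print(round(ten_doors(switch=False), 2))  # ~0.10
```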

Completeness of the explanations

The lesson to learn from the Monty Hall problem isn't one of mathematics, but of psychology -- that your instincts might be incorrect. With that in mind, the explanations given don't have to be complete. They have these two goals: 1) to be correct, 2) to be easy to understand. I tried wading into the Bayes theorem article and just couldn't make my way, so I don't think it meets goal #2 as a means of explanation.

I find it difficult to see in what way T.Z.K. disagrees with the article. I didn't like the chart that showed probabilities adding up to 200%, so I changed it. If T.Z.K. would like an article that actually is directed at mathematics, then I suggest expected number. I edited the grammar there, but the phrasing of the math is still weak. If he wants to revise the wording, he can give that a shot too.

I like the extension he proposed to the problem. If one goat is blue and the other red, and you know that Monty always picks the red goat when he has a choice, then if he opens a door with a blue goat you must switch for it is 100% that your original pick was the red goat; if he shows a red goat it is now 50-50 whether or not to switch.
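This red/blue extension can be checked by simulation too (a sketch under the stated assumption that Monty always reveals the red goat when he has a choice; function and variable names are mine):

```python
import random

def red_goat_bias(n=200_000):
    """Monty always reveals the red goat when the player holds the car.
    Returns, per revealed colour, the fraction of the time switching wins."""
    stats = {"red": [0, 0], "blue": [0, 0]}  # [times shown, switch-wins]
    for _ in range(n):
        pick = random.choice(["car", "red", "blue"])
        if pick == "car":
            shown = "red"                # his fixed preference
        else:
            shown = "blue" if pick == "red" else "red"
        stats[shown][0] += 1
        stats[shown][1] += (pick != "car")  # switching wins iff pick was a goat
    return {c: wins / shows for c, (shows, wins) in stats.items()}

est = red_goat_bias()
print(est["blue"])           # 1.0: blue shown means you hold the red goat
print(round(est["red"], 2))  # ~0.50: red shown leaves it 50-50
```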

Such extensions are better off left out of the article lest they add to confusion.

Lastly, I like Antanaeus. He is remarkably patient with those who come here for help. JethroElfman 18:35, 22 February 2006 (UTC)

Thank you, Jethro -- you don't know how good that is to hear. Sometimes, even though I work hard at it, I feel like patience is what I'm worst at! -- Antaeus Feldspar 20:49, 22 February 2006 (UTC)
Well, you're certainly more patient than I — hence my current silence. ;) – Wisq 04:18, 23 February 2006 (UTC)
Antaeus, "The superior man is modest in his speech, but exceeds in his actions." -Confucius hydnjo talk 13:01, 23 February 2006 (UTC)

Number of doors vs. number of ways of getting there

The answer to the problem is NO. The logical fallacy in the reasoning which leads to the positive conclusion lies in considering the second goat when there is no longer a second goat.

Think of this: two doors remain. One has a goat, the other has a car. Chances are 50/50. It does not matter whether my door conceals the car or the goat. My switching or not is like a new decision, a new choice. In this new choice, I choose to keep my door (which is the same as re-picking that door), or to pick the other door. The open door no longer counts. The chance to switch creates a new choice with new alternatives, erasing the old ones.

The answer really is YES. Even though there are only two doors left, there are three ways of getting there: by picking goat #1, by picking goat #2, or by picking the car. Two out of those three ways result in switching being the right answer. -- Antaeus Feldspar 21:27, 24 February 2006 (UTC)