User talk:Gill110951

From Wikipedia, the free encyclopedia
This user is a member of WikiProject Mathematics.
This user is a member of WikiProject Statistics.
This user has an Erdős number of 3.
This user has an h-index of ca. 20.
This user is a mathematician.
Societies
This user is a member of the Bernoulli Society for mathematical statistics and probability (BS).
This user is a member of the Institute of Mathematical Statistics (IMS).
This user is a member of the International Statistical Institute (ISI).
This user is a member of the Dutch society for statistics and operations research (VVS-OR).
This user is a member of the Dutch mathematical society (KWG).
This user is a member of the Institute of Physics (IoP).
This user is far from normal.
This user's deviation is not standard.
This user is definitely an outlier.
This user has been assimilated. Resistance is futile.


Earlier postings moved to Archive 1, 2, ..[edit]

User_talk:Gill110951/Archive 1 (November 2006 to December 2010).

User_talk:Gill110951/Archive 2 (December 2010 to February, 2011)

User_talk:Gill110951/Archive 3 (February 2011 to July, 2011)

User_talk:Gill110951/Archive 4 (July 2011 to November, 2011)

Essay on Probability Notation[edit]

During the Monty Hall Problem wars, at the suggestion of a fellow editor, I wrote a little essay on notation in probability theory: [1]. It could be useful for Two Envelope Problem editors, too. Richard Gill (talk) 11:13, 11 August 2011 (UTC)

TEP: The heart of the matter[edit]

Let A and B denote random variables whose joint probability distribution encapsulates our uncertainty about the actual amounts a and b in the two envelopes. I do not need to assume here that A is half or twice B. I just assume that A and B are always different and that their distribution is symmetric under exchange. The following facts can therefore be used for two envelopes (all symmetric versions), two neckties, and two-sided cards; with or without subjective probability, with or without finite expectations. The derivation is elementary. The results are not surprising. The point is that they are general results. Many solutions take a particular prior distribution by way of example and show that certain of these facts hold for it. That is a bit unsatisfactory, because it doesn't prove that the results always have to be true, and hence leaves a doubt in the mind of the reader. For example, this is why Martin Gardner felt that neither Kraitchik's problem nor TEP had been properly solved at the time he wrote about them: he had only seen particular examples, and these do not prove that what we see in those examples always has to be true.

Theorem

  • (1) Under symmetry, E(A)=E(B).
  • (2) Under symmetry, if E(A) is finite, then it is impossible that E(B|A=a) > a for all a.
  • (3) Under symmetry, it is impossible that P(A < B|A=a)=1/2 for all a.

Proof

(1) is obvious (symmetry!).

(2) Proof by contradiction with (1): if E(B|A=a) > a for all a, then taking expectations gives E(B) > E(A), which contradicts (1) unless both expectations are infinite or undefined; that is ruled out by the assumption that E(A) is finite.

(3) Proof by symmetry and the "stochastic dependence" between the random variable A and the event { A < B }. If P(A < B|A=a) = 1/2 for all a, then the event { A < B } is independent of the random variable A. Now replace A and B by A' = g(A), B' = g(B), where g is a strictly increasing function from the real line into a bounded interval of the real line (for instance, the arc tangent function). All the assumptions we made about A and B also hold for the transformed versions, but now we can be certain that expectation values are finite. From now on, I drop the prime and just write A and B for these transformed versions. Consider the trivial inequality E(A-B|A-B > 0) > 0. Since expectation values are finite, this can be rewritten as E(A|A > B) > E(B|A > B) = E(A|B > A), where the last equality uses symmetry. This inequality shows that A is statistically dependent on the event { A > B }, hence the event { A > B } (and with it its complement { A < B }, since A and B are always different) is statistically dependent on the random variable A. Transforming back to the original variables, this remains true.

Corollary (an exercise for connoisseurs/students of probability theory). Let g be a strictly increasing function and let A' = g(A), B' = g(B). Then the theorem also applies to the pair A' and B'. Extend to not necessarily strictly increasing g by approximating with strictly increasing functions and going to the limit (strict inequalities need no longer be strict in the limit). We find

  • (4) The probability distributions of A|A < B, of A, and of A|A > B are strictly stochastically ordered (from small to large).
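Facts (1) and (4) can be checked empirically for any concrete proper prior. Here is a small simulation sketch (my own illustration, not part of the original argument; the exponential prior and the x / 2x envelope contents are arbitrary choices, not required by the theorem):

```python
import random
from statistics import fmean

# My own illustration: check facts (1) and (4) by simulation, using an
# arbitrary proper prior for the smaller amount and x / 2x envelope contents.
random.seed(0)
N = 100_000
a_vals, b_vals = [], []
for _ in range(N):
    x = random.expovariate(1.0)        # smaller amount; any proper prior works
    pair = [x, 2 * x]
    if random.random() < 0.5:          # exchangeable assignment gives symmetry
        pair.reverse()
    a_vals.append(pair[0])
    b_vals.append(pair[1])

# Fact (1): under symmetry E(A) = E(B), up to sampling noise.
print(abs(fmean(a_vals) - fmean(b_vals)) < 0.05)

# Fact (4): A given A<B, A unconditionally, and A given A>B are
# stochastically ordered (small to large); compare means as a crude check.
lo = [a for a, b in zip(a_vals, b_vals) if a < b]
hi = [a for a, b in zip(a_vals, b_vals) if a > b]
print(fmean(lo) < fmean(a_vals) < fmean(hi))
```

Both lines print True: the two envelopes have equal mean contents, while conditioning on holding the smaller (larger) envelope shifts the distribution of A down (up).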

These facts take care of the main variants of the two envelopes problem as well as of all its predecessors: the two neckties and the two-sided cards. The only way to escape the facts is to assume improper distributions. But they are ... improper. In fact, they are ludicrous, according to Schrödinger, Littlewood, Falk, and just about everyone else.

I have also posted this proof on my university home page, [2] Richard Gill (talk) 14:06, 1 December 2011 (UTC)

You say in the introduction that it doesn't matter whether the distribution has infinite expectation or not, but in your theorem you explicitly exclude that case. So it does indeed matter. This is the first lie. Then at the end you say that the only way to escape your solution is to assume improper distributions. But hey, do you really think that all distributions having infinite (or undefined) expectations are improper distributions? I know you know better than that. This is the second lie. Also, you have forgotten to make all your assumptions explicit, like stating that all values can be mapped onto the (infinite) real line. This assumption is not true in practice, which is why your theorem (2) breaks down in reality. And you take no account here of the fact that you believe utility is bounded above at some arbitrary (finite) level. And where is your opinion represented that we have to truncate the support of some proper distributions to escape the paradox? Here nothing is bounded and nothing is truncated. I guess you are in your math Nirvana now, which has nothing to do with reality, right? In that case this is about as interesting as theology. iNic (talk) 08:45, 6 December 2011 (UTC)
Random variables are by convention real valued. The theorem gives information about all cases with proper distributions and real-valued random variables, whether or not expectations are finite. Of course the theorem gives a glimpse into a mathematical Nirvana. Application to TEP and other exchange paradoxes has to be worked out case by case. This is not difficult to do. I am revising my work-in-progress paper on TEP so as to incorporate the results of the theorem. However, I am writing in the first instance for mathematicians, not for philosophers and not for lay persons, so I doubt you will be pleased with the results. Still, I am very grateful for the discussions with you, because they triggered the new result about stochastic ordering. With some colleagues I am looking for further new results. This has opened up a fascinating new avenue of TEP investigations.

I have yet to see an interesting TEP-like paradox in which we cannot give a numerical utility to the two objects we must compare. I don't see anything interesting I can do in that direction. Maybe sometime you'll provide us with inspiration.

The derivation of (4) does not require finite expectations. And it follows that for any strictly increasing g such that Eg(A) is finite, we have all these results for g(A) and g(B). The application to bounded utility is immediate. Richard Gill (talk) 10:17, 6 December 2011 (UTC)

OK so (2) requires finite expectation but (4) does not?

So you say that these results are easier to apply in real cases than your unified solution? You never managed to show how to apply that in a single case. Instead you started to talk about utility theory and fundamental problems with infinity in real cases. Your theorem was never put to use. I'm glad you say that these results are much easier to apply. Please show how to apply them in practice. Pick your favorite case.

You still haven't responded to the fish soup situation. Will you pick the other hidden dish or will you stick with the fish soup? This situation isn't symmetric, as you already know what one of the dishes is. Your utility for the fish soup is some number X. The expected utility for the other dish is larger than X. What will you do and why? iNic (talk) 13:18, 6 December 2011 (UTC)

So you chose to use real values to solve this problem, haunted by infinity-related issues, just by convention? Nothing in the original TEP formulation requires any infinities, only two numbers, and you throw in a continuum of infinities just like that? (Maybe that's because of how you mathematicians count: one, two, infinity...) It is, however, possible to formulate the paradox in a finite setting, and then the real numbers can't be utilized in the solution. iNic (talk) 13:45, 6 December 2011 (UTC)

Please have a look there[edit]

Richard, please have a look there.  –  I know, it's not your style, it's just mine. Nevertheless: is it correct or is it wrong? Regards, Gerhardvalentin (talk) 21:23, 27 December 2011 (UTC)

Re:Please undelete Two envelopes problem/sources[edit]

Re:Please undelete Two envelopes problem/sources

Unfortunately I am unable to undelete this page, as I am not an admin. I wanted to help, and responded that Talk/Two Envelopes Problem/sources was never deleted. Talk:Two envelopes problem/sources, however, was deleted, and it is possible to undelete it. Bulwersator (talk) 07:59, 4 January 2012 (UTC)

Fastily has helped us out now! Richard Gill (talk) 15:24, 4 January 2012 (UTC)

Dispute resolution survey[edit]


Dispute Resolution – Survey Invite


Hello Gill110951. I am currently conducting a study on the dispute resolution processes on the English Wikipedia, in the hope that the results will help improve these processes in the future. Whether you have used dispute resolution a little or a lot, now we need to know about your experience. The survey takes around five minutes, and the information you provide will not be shared with third parties other than to assist in analyzing the results of the survey. No personally identifiable information will be released.

Please click HERE to participate.
Many thanks in advance for your comments and thoughts.


You are receiving this invitation because you have had some activity in dispute resolution over the past year. For more information, please see the associated research page. Steven Zhang DR goes to Wikimania! 11:13, 5 April 2012 (UTC)

Content vs the messenger[edit]

I think you could seriously accelerate the end of the Bell's Theorem discussion by avoiding comments that refer to J. Christian as a person, and focusing on WP:Fringe as a policy. The more you talk about the researcher, the more upset they become and the longer the discussion. This issue can be resolved by WP:CON if you stop personal comments. And I do not think there is a legal issue yet, but if you continue those personal comments, those overtones will in the end appear. So it is best to avoid personal comments and focus on content and policy. History2007 (talk) 08:54, 1 June 2012 (UTC)

Thank you for your wise advice. In my defense, in my own opinion I have focussed all the time on content and on Wikipedia policies. Diether and Christian themselves raised the issue of academic qualifications and resorted to personal abuse. Concerning academic credentials: an internet search of University of Oxford web pages fails to find any evidence whatever of any academic affiliation of the author J. Christian with the University of Oxford, despite the fact that his 12 or so arxiv.org "publications" on his Bell refutation give the University of Oxford as his academic address. Personal communication with leading members of the likely departments concerned confirms this. Similarly, he has had no affiliation of any kind with the Perimeter Institute for many years. Nobody would have made these enquiries (I was not the one who initiated them) if Christian had not resorted to pretty obscene personal insults to anyone who dared suggest his work was flawed, on the much-read blog of Scott Aaronson (a leading expert on quantum computation). It is difficult to imagine Galileo, Kepler, or Einstein behaving like this. Aaronson started a blog entry on Christian's work (not person) in reaction to a personal challenge to Aaronson by Christian, coinciding in time with the appearance of Christian's book. This certainly gave the book publication a great deal of publicity. Now this little Wikipedia quarrel is generating yet more publicity. Richard Gill (talk) 13:01, 1 June 2012 (UTC)
I suggest you stop trying to convince the other editors of the argument: User_talk:Thomas_h_ray#Constructive/nonexistence,_maths/metaphysics. You are unlikely to convince someone who has extreme beliefs; instead, on Wikipedia we focus solely on arguments based on policies and guidelines. IRWolfie- (talk) 10:18, 4 June 2012 (UTC)
Thanks. As a mathematician I also have a deeply felt conviction that mathematical truth decides arguments about mathematics, just as physicists believe Nature is the ultimate arbiter of physics. But I know it is not a criterion on Wikipedia. (I was struck by User:Count_Iblis's statements about Wikipedia policy and editing scientific topics on his Talk page.)

Anyway, I had the impression that Thomas Ray might be susceptible to mathematical arguments. He also remains polite and good humoured during heated scientific debate, in contrast to some others...

I don't like to see good people making fools of themselves. And talent being wasted. A lot of people in the quantum foundations community are really sorry for the predicament Joy has got himself into. He's widely thought to be a nice guy and he's certainly very intelligent and has many talents. But he does not take easily to criticism.

Bell's theorem is a really important topic, and very hard to get across to laypersons. It used to be squaring the circle and perpetual motion machines, but nowadays Bell's theorem gets the attention: intelligent, independent-minded people get fascinated and become convinced there's something wrong there. That means we scientists are not communicating well enough what it's about. A real challenge for Wikipedia. Richard Gill (talk)

Bell's theorem[edit]

This is how I explain the mathematical core of Bell's theorem to teenagers:

Consider 4N runs of a Bell-Aspect-Weihs delayed choice CHSH type experiment. Suppose that Nature is such that in each run, binary outcomes A, A', B, B' (each +/-1) can be thought of as all existing alongside one another, but that only one of A and A', and only one of B and B', are actually observed, the choices being made by independent fair coin tosses, independent of the physical processes generating the 4N realizations of the four binary variables A, A', B, B'.

That is, suppose we assume counterfactual definiteness (aka realism), locality (aka relativistic local causality), and freedom (aka no conspiracy, i.e. no superdeterminism).

It's easy to see that AB+AB'+A'B-A'B' = A(B+B') + A'(B-B') = +/-2 in each run. (B and B' are either equal or different: if equal, B-B' = 0 and B+B' = +/-2; if different, B+B' = 0 and B-B' = +/-2.)

It follows from taking averages over the 4N runs, that ave(AB)+ave(AB')+ave(A'B)-ave(A'B') lies between -2 and +2.

Finally: if N is very large, the average of AB over the runs where A and B are both observed (that's about N out of the 4N, and they're selected completely at random) will be very close to the average of AB over all 4N runs; and similarly for AB', A'B, A'B'.

If this last point is doubted, one can put numbers to "how close, with what probability" using Hoeffding's inequality for tails of the binomial distribution and of the hypergeometric distribution. It turns out that the probability that CHSH is violated by more than some amount delta is less than C exp( - D N delta^2) for certain positive constants C and D. To be precise, C = 8 and D = 1/64 will do, if we restrict delta to the interval (0,2).

The point is, everything here is discrete and finite, including the probability, which is really a counting argument going through the 2^(8N) equally likely sets of outcomes of the 8N independent fair coin tosses. Richard Gill (talk) 11:21, 4 June 2012 (UTC)
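The counting argument above can be turned into a short simulation. This is my own sketch, not part of the original explanation; the pre-existing outcomes are drawn at random here, but any "realistic" assignment of A, A', B, B' would do:

```python
import random

# My own sketch of the finite counting argument: assume all four outcomes
# A, A', B, B' pre-exist in every run (counterfactual definiteness).
random.seed(1)
N4 = 40_000                          # 4N runs
runs = [[random.choice([-1, 1]) for _ in range(4)] for _ in range(N4)]

# Per-run identity: AB + AB' + A'B - A'B' = A(B+B') + A'(B-B') = +/-2.
for A, Ap, B, Bp in runs:
    assert A*B + A*Bp + Ap*B - Ap*Bp in (-2, 2)

# Each run actually measures one of the four setting pairs, chosen by
# two independent fair coin tosses (locality + freedom).
sums = {k: [0, 0] for k in ("AB", "AB'", "A'B", "A'B'")}   # [total, count]
for A, Ap, B, Bp in runs:
    key = ("A" if random.random() < 0.5 else "A'") + \
          ("B" if random.random() < 0.5 else "B'")
    sums[key][0] += {"AB": A*B, "AB'": A*Bp, "A'B": Ap*B, "A'B'": Ap*Bp}[key]
    sums[key][1] += 1

avg = {k: t / n for k, (t, n) in sums.items()}
chsh = avg["AB"] + avg["AB'"] + avg["A'B"] - avg["A'B'"]
print(abs(chsh) <= 2.1)              # CHSH bound holds up to sampling noise
```

The observed averages use only about N of the 4N runs each, yet stay close to the full averages, so the CHSH combination cannot stray far above 2, just as the Hoeffding bound quantifies.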

Changes to the local realism redirect[edit]

Hello, Gill110951. You have new messages at Talk:Local_hidden_variable_theory#Local_realism.
Message added 16:32, 18 June 2012 (UTC). You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template.

Richard, please can you have a look[edit]

Richard, please can you have a look to what I wrote today 11 August there? Can you help with refs? Will you sign my RfC also, or do you have some other proposal? Kind regards, Gerhardvalentin (talk) 13:22, 11 August 2012 (UTC)

Invitation to comment at Monty Hall problem RfC[edit]

Because of your previous participation at Monty Hall problem, I am inviting you to comment on the following RfC:

Talk:Monty Hall problem#Conditional or Simple solutions for the Monty Hall problem?

--Guy Macon (talk) 22:54, 6 September 2012 (UTC)

Your MHP article[edit]

I took a quick look at your paper http://www.math.leidenuniv.nl/~gill/essential_MHP.pdf , which someone mentioned in the RfC. I haven't had a chance to digest it yet. But I wonder if you could point me to one thing.

You say in the abstract that your approach is based on the minimax notion from game theory. But in that case it seems to me that your (the player's) odds are always 1/3, and your optimal strategy is never to switch. Rationale: Monty, your opponent, can always limit your odds to 1/3 by the very simple strategy of never offering you a choice. However, an equally good strategy for Monty, against a perfect opponent (and a better one, against an imperfect opponent) is to offer you a choice exactly when you've already chosen the car. Clearly, any strategy in which he offers you a choice when you have not picked the car is inferior for Monty.

Since Monty cannot do better, against perfect opposition, than 1/3, you should assume he is playing one of the strategies with value 1/3, which are all ones in which he never offers you a choice unless you have already picked the car. Therefore, if he offers you a choice, you have already picked the car, and must not switch.

Can you point me to where your assumptions differ from mine, or point out a flaw in the argument? --Trovatore (talk) 04:37, 8 September 2012 (UTC)

There are many game theoretic formulations of MHP. What are the restrictions on the two parties? (Host, contestant). I like to see MHP as having four steps:
1) host hides car
2) player picks door
3) host opens another door revealing goat
4) player reconsiders pick
If these are the rules for the two parties, and the parties are allowed randomized strategies, then the host's strategy has two components: probability distribution of the initial location of the car; probability distribution of the door to open given door hiding car and door picked by contestant. I don't give the host the option not to open a door! (Otherwise it's not the MHP any more ...)
The contestant's strategy has similarly two components: probability distribution of initial door to pick; probability distribution of renewed pick given initial choice and door opened by host.
I hope you are aware of Sasha (A. V.) Gnedin's recent publications on the decision theoretic view of MHP, in which he shows how no probability whatever is needed to determine the optimal strategy for the player. He shows that any player strategy (initial door picked and rule for switching or not depending on that and on the host's opened door) is dominated by an "always switch" strategy (possibly with a different initial pick). That is to say, given any particular strategy we can easily find another strategy, but now an always-switch strategy, such that wherever the car is hidden and whatever door is opened by the host, the second strategy does as well or better than the first.
A smart contestant thinks about the game before going to the show. He knows he's going to be asked whether or not he wants to switch. The above remark shows that of course he will switch, whatever... His only interesting choice is which door to choose initially. Since he's going to switch anyway it would be foolish to make this initial choice his favourite number or something like that. He's going to chuck it away, anyway. If he's smart he'll fix his initial pick completely at random, so that he won't feel uneasy about changing it later. He knows he'll get the car with probability 2/3, whatever the host does. He's not interested in the conditional probability of this or that... since it depends on things he doesn't know, and anyway, 2/3 overall win-chance can't be improved. Richard Gill (talk) 04:54, 8 September 2012 (UTC)
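The claim in the last paragraph, that "randomize, then always switch" wins with probability 2/3 whatever the host does, is easy to check by simulation. This is my own illustration, not from the discussion; the two host strategies tried are arbitrary examples:

```python
import random

# My own sketch: "pick a door uniformly at random, then always switch"
# wins with probability 2/3 whatever the host's hiding strategy is.
random.seed(2)

def host_opens(car, pick):
    # The host must open a door hiding a goat, different from the pick.
    # When he has a choice, let him be as biased as he likes (lowest door).
    return min(d for d in (1, 2, 3) if d not in (car, pick))

def play(hide_car):
    car = hide_car()
    pick = random.choice([1, 2, 3])          # the player's only randomness
    opened = host_opens(car, pick)
    switched = next(d for d in (1, 2, 3) if d not in (pick, opened))
    return switched == car

n = 100_000
results = []
for hide_car in (lambda: 1,                          # host always uses door 1
                 lambda: random.choice([1, 2, 3])):  # host hides uniformly
    wins = sum(play(hide_car) for _ in range(n))
    results.append(wins / n)
    print(abs(wins / n - 2/3) < 0.01)        # about 2/3 either way
```

Switching wins exactly when the random initial pick misses the car, which happens with probability 2/3 regardless of where the car is hidden or which legal door the host opens.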
Ah, so you are assuming that Monty must open a door and offer you a choice, is that correct? But that is nowhere in the statement of the problem (and it is not what happened on the show, either, if I recall correctly). --Trovatore (talk) 04:59, 8 September 2012 (UTC)
Of course. It is part of the Monty Hall problem as defined by Marilyn vos Savant (and earlier by other writers). The problem is only loosely related to the actual show. Richard Gill (talk) 05:02, 8 September 2012 (UTC)
Except that it is not part of the problem as defined by vos Savant, at least in the passage quoted at the top of the article. There is not a word about the host being required to open a door. All you know is that he did open a door.
If earlier writers were more explicit on this point, then perhaps they should be quoted at the top instead of vos Savant. The problem as stated does not have any hint of this requirement. --Trovatore (talk) 08:24, 8 September 2012 (UTC)
I'm not responsible for the confusion which is introduced at the top of the article by not immediately adding the so important "small print"! Vos Savant went on immediately to make explicit this key assumption. As a statistical brain teaser the problem starts with an article by Steve Selvin in 1975. He makes it clear, too. It's a key ingredient in the standard MHP. Richard Gill (talk) 03:48, 9 September 2012 (UTC)

Nice, but be careful[edit]

Gill, I like your recent comments on MHP talk (and support you there). But it bothers me that you also add your comments to the "Comments from Nijdam" section. Only Nijdam is allowed to write there. You know, "The Arbitration Committee has permitted Wikipedia administrators to impose discretionary sanctions on any editor editing this page or associated pages..." We need you alive here! :-) Boris Tsirelson (talk) 20:11, 12 September 2012 (UTC)

Oops! I had better remove them. Thanks. — Preceding unsigned comment added by Gill110951 (talkcontribs) 06:57, 13 September 2012‎ (UTC)
Removing them was good, but now you have exceeded the 500 word limit, which is not fair to everybody who stayed within the limit. See the top of Talk:Monty Hall problem#Comments from Richard Gill. The obvious solution is to stop trying to have a threaded discussion in the middle of an RfC comment, but rather to have it elsewhere on the talk page. Remember, an uninvolved administrator is going to have to go through the entire RfC and make a determination as to what the consensus is. ---Guy Macon (talk) 22:17, 13 September 2012 (UTC)
Thanks. I'll count and trim. Incidentally, I think the best thing for the page would be to ban all long-time editors of the page for a year. Let fresh blood come in. Richard Gill (talk) 05:09, 14 September 2012 (UTC)

Request for you at talk:MHP[edit]

Hi - Just so you're sure not to miss it, Martin has addressed a question to you [3]. Please respond there. -- Rick Block (talk) 02:57, 22 September 2012 (UTC)

In recognition of the huge improvement in the Two Envelopes Problem[edit]

The Cleanup Barnstar
I am astonished by how much the Two_envelopes_problem article has improved. Many mystifying or just wrong points have been removed by your edits, and clear resolutions have been put in their place.

Well done on diligently working through the issues, and thank you! Dilaudid (talk) 09:10, 3 October 2012 (UTC)

MHP: Assuming a good opponent[edit]

I vaguely recall you saying that your preferred solution is: The best strategy is to pick randomly and switch, giving you 2/3 overall chances of winning which can't be improved, I can't say anything about conditional probabilities, that's all. Is that accurate? If so, I am curious: How do you justify that view?

It seems to me that you are using game theory for this, essentially assuming that you're playing against a good opponent, and seek the best strategy against him. "Good opponent" means that nobody should be able to do better against you on average than he. But once you assume that and play the optimal strategy yourself, you have all the conditional probabilities you could ever want! How can you consistently claim not to have those? The justification for your strategy is that if you played otherwise your opponent would exploit it (if he won't, you can generally do better); or alternatively, that he might or might not exploit it (can't say) so you assume the worst (i.e. you assume he's good!). Any which way, you end up assuming a good opponent, don't you?

You could argue that you are only assuming the part of the result of such an analysis that you actually use: your own strategy. But if you do that, you have turned the result of a well-motivated analysis into an unmotivated, arbitrary assumption. Moving forward regardless, assuming "pick randomly, switch" to be optimal is equivalent to assuming the car is placed randomly. Of these two assumptions, the latter one is by far the more interesting one to make at the outset for basic MHP purposes, and once you do that, you get the Morgan 1/(1+q) solution, which you dislike.

Consequently, if you see MHP as a game theory problem, don't you end up getting a well-defined answer to the (conditional) probability of winning question as well? :) -- Coffee2theorems (talk) 15:12, 5 October 2012 (UTC)

Short answer: read my Statistica Neerlandica paper. Long answer: maybe later this weekend. Personally, I'm more comfortable with frequentist than subjective probability. By choosing my door initially at random I introduce a "hard" probability ingredient. A Bayesian would want to condition on the outcome of my randomization. But I'm lazy, I don't want to think. I prefer to keep my eyes shut and switch.
There is no place for randomization in Bayesian statistics. But a big place for it in Science. Richard Gill (talk) 20:32, 5 October 2012 (UTC)
I think I see now. Your claim is not that "pick randomly, then switch" is objectively the best strategy, or that 2/3 is objectively the best result (which requires "given a good opponent" qualification). It's only that without extra information, overall objective 2/3 is the best hard guarantee you can get. That way, the objective conditional probability remains unknown. It's word choices like the paper's "2/3 [is] the best you can hope for" that give the impression that optimality is claimed – unless the opponent's strategy is known, surely you can always hope! You might like to gamble and trust your best guess of the goat door, and then switch. The objective overall probability of winning that way is unknown, but might be better than 2/3, who knows?
I think a subjectivist with a uniform prior should randomize, though, to ensure you get what you believe. Or even randomize the door numbers, to ensure the uniform priors are calibrated – why not ensure that what you already believe is correct! (at least when modeled using door numbers, even if not using door locations; if you want to make side bets, you'll have a problem unless you can convince others to use the same secret random labels)
Maybe this is the most that can be said: If the opponent (host/producer) doesn't want to give you the car and plays well (K&W or Morgan), picking randomly and always switching is guaranteed to be an optimal strategy, and it wins 2/3 of the time overall. The strategy cannot be exploited even if the opponent knows it. In any particular situation, it wins 2/3 (or ≥1/2) of the time. If the opponent's strategy is unknown, then although better strategies might exist for you, overall 2/3 is the best hard guarantee you can know of, and the same strategy gives you that. In any particular situation, the (natural) epistemic probability that switching wins is 2/3, and the objective probability is unknown. -- Coffee2theorems (talk) 05:36, 10 October 2012 (UTC)
You say that I'm not claiming "pick at random then switch" is objectively the best strategy. Well, this depends on what you or I mean by "best". I claim it is best, in a very strong objective sense. It will give you the car with probability 2/3, whatever the host does. No other strategy has this guarantee, no other strategy has a better guarantee.
I don't have any objective information on the basis of which I can say what is the probability of winning by switching, given my initial choice, and given the door opened by the host. But I don't need to know this probability, I'm not interested in it. Richard Gill (talk) 13:21, 10 October 2012 (UTC)
What I meant by objectively best/optimal strategy is the literal meaning: a strategy that has the highest objective overall probability of winning. If the producer deterministically places the car behind door 1, then picking that and staying is one objectively best strategy. If the producer randomizes, then picking randomly and switching is one objectively best strategy. The latter one is also unexploitable by any opponent, and best as you define it (i.e. an equilibrium strategy). Anyhow, I think I understood your position now, and that's what I wanted to know. -- Coffee2theorems (talk) 17:45, 10 October 2012 (UTC)
Good, that's clear. For a given specification of the quizmaster's strategy there is a best strategy of the player. But we don't know it, are not told about it. Even if we observed the game many times in the past, how do we know that the host will behave the same way, tonight? All we know are the rules: the host will certainly show us a goat behind another door. "Randomize and switch" is the unique minimax strategy: gives us the best chance of winning, whatever the host does. Assuming the host doesn't want us to win, his minimax strategy is: hide the car completely at random and open a different goat door completely at random. Symmetry is at work here, too. The problem is symmetric in the door numbers. A minimax solution exists, by von Neumann's theorem. By symmetry, there exists a symmetric minimax solution. All this known since Nalebuff popularized MHP in the decision theory literature soon after Selvin did in statistics, long before Vos Savant made it famous in popular literature. Richard Gill (talk) 18:36, 10 October 2012 (UTC)
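The two quantities this thread keeps contrasting, the overall 2/3 and the Morgan-style conditional 1/(1+q) mentioned earlier, can be exhibited together in one simulation. This is my own sketch; the host bias q = 0.8 is an arbitrary choice for illustration:

```python
import random

# My own sketch: car hidden uniformly, player picks door 1, and the host
# opens door 3 with probability q when he has a choice (car behind door 1).
random.seed(3)
q = 0.8
n = 200_000
opened3 = wins_given3 = wins = 0
for _ in range(n):
    car = random.choice([1, 2, 3])
    if car == 1:
        opens = 3 if random.random() < q else 2   # host exercises his bias
    else:
        opens = 2 if car == 3 else 3              # host's hand is forced
    switch_wins = (car != 1)
    wins += switch_wins
    if opens == 3:
        opened3 += 1
        wins_given3 += switch_wins

print(abs(wins / n - 2/3) < 0.01)                    # overall: 2/3
print(abs(wins_given3 / opened3 - 1/(1+q)) < 0.02)   # given door 3: 1/(1+q)
```

Whatever the bias q, the conditional probability 1/(1+q) never drops below 1/2, while the overall probability of winning by switching stays at 2/3, which is why "randomize and switch" needs no knowledge of q at all.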

Yet another solution to MHP[edit]

Editor TotalClearance came up with the following solution to MHP. Suppose the goats are numbered Goat 1, Goat 2, and the host has a preference to reveal Goat 1. Suppose the three objects (Car, Goat 1, Goat 2) are equally likely to be arranged in any of their six permutations behind the three doors. Then we can set up a table of six equally likely possibilities as follows:

Original table as modified by Richard.

behind door 1 | behind door 2 | behind door 3 | opened door        | result if staying at door #1 | result if switching to the door offered
Car           | Goat 1        | Goat 2        | 2 (to show Goat 1) | Car                          | Goat 2
Goat 1        | Car           | Goat 2        | 3 (forced)         | Goat 1                       | Car
Goat 1        | Goat 2        | Car           | 2 (forced)         | Goat 1                       | Car
Car           | Goat 2        | Goat 1        | 3 (to show Goat 1) | Car                          | Goat 2
Goat 2        | Car           | Goat 1        | 3 (forced)         | Goat 2                       | Car
Goat 2        | Goat 1        | Car           | 2 (forced)         | Goat 2                       | Car

Switching gives the car in four out of the six cases. On those occasions when the host opened door 3, switching gives the car in two out of three cases. Richard Gill (talk) 08:13, 4 November 2012 (UTC)
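The six-case count above can be verified by brute-force enumeration. A minimal sketch, assuming as in the table that the player has chosen door 1 and the host prefers to reveal Goat 1 when he has a choice (the index convention 0–2 for doors 1–3 is mine):

```python
from itertools import permutations

wins_overall = door3_cases = door3_wins = 0
for arrangement in permutations(("Car", "Goat 1", "Goat 2")):
    # the player has chosen door 1 (index 0); the host prefers to reveal Goat 1
    if arrangement[0] == "Car":
        opened = arrangement.index("Goat 1")                         # host's choice
    else:
        opened = next(i for i in (1, 2) if arrangement[i] != "Car")  # forced
    offered = next(i for i in (1, 2) if i != opened)
    win = arrangement[offered] == "Car"
    wins_overall += win
    if opened == 2:                     # i.e. door 3
        door3_cases += 1
        door3_wins += win

print(wins_overall, "of 6 cases won by switching")   # 4 of 6
print(door3_wins, "of", door3_cases, "won by switching when door 3 was opened")  # 2 of 3
```

This reproduces both counts in the comment above: switching wins in four of the six equally likely cases, and in two of the three cases where door 3 was opened.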

In fact my conditional solution table is as follows, with indistinguishable goats. It is an expansion of vos Savant's solution table, completely modelling the host's alternatives of opening door 2 or door 3 when he has a choice:
behind door 1 | behind door 2 | behind door 3 | opened door | result if staying at door #1 | result if switching to the door offered
Car | Goat | Goat | 2 | Car | Goat
Goat | Car | Goat | 3 | Goat | Car
Goat | Goat | Car | 2 | Goat | Car
Car | Goat | Goat | 3 | Car | Goat
Goat | Car | Goat | 3 | Goat | Car
Goat | Goat | Car | 2 | Goat | Car

--TotalClearance (talk) 13:42, 4 November 2012 (UTC)

OK, so there are initially only three possibilities but the two where the host has no choice are split in two (as if he tosses a coin anyway, even if he has no choice) to preserve "equally likely outcomes". The table needs quite a bit of explaining. Fine by me, e.g. in a probability class, but not so convenient when discussing MHP at a party. I'm personally more interested in solutions based on a few intuitive ideas only; not solutions which need algebra or arithmetic or tabulation. Richard Gill (talk) 14:30, 4 November 2012 (UTC)

Disambiguation link notification for November 19[edit]

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Monty Hall problem, you added a link pointing to the disambiguation page Bayesian (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:40, 19 November 2012 (UTC)

My comments on the MHP talk page[edit]

Richard, I am sorry for the remarks about your complicating things on the MHP talk page. I completely misread what you wrote, somehow seeing it as saying that the presence of car next to a goat might make that goat more likely to open a door by affecting the goat in some way. When I read what you wrote again it is perfectly clear and correct. I was trying to create a case where a goat was revealed behind an unchosen door with certainty and the car was never revealed but I failed to do this properly. Martin Hogbin (talk) 18:16, 21 January 2013 (UTC)

No problem! I am aggressively pushing the insights which (I believe) Bayes' rule can give to MHP. For a given specific problem one can generally find a "once off" simple solution. But if you want to understand why similar but not identical problems have different solutions, it doesn't help very much to have two rather different "once off" simple solutions which each work for one problem only. And then along comes another variant and we are back to square one. I strongly believe that Bayes' rule gives the "deep insight" to really understand what is going on, but since I am probably incapable of expressing it myself in the kind of words which everyone could easily understand, I first need to convince a wise non-mathematician expert on MHP like yourself.
Now it is true that the typical Bayes rule argument is easiest to run through when we label the doors in advance, e.g. 1, 2, 3, and thereafter specify the labels of the doors chosen, opened.... One can afterwards "drop" the specific labels by the standard symmetry argument. Alternatively one can try to apply Bayes' rule with doors only identified by their role in the problem (door chosen, door opened, etc). But it is trickier.
So I certainly am learning more about MHP myself from all these exchanges. Richard Gill (talk) 19:07, 21 January 2013 (UTC)

Combining doors and Bayes' rule[edit]

Richard, I was surprised to read your comment, 'If we realise this in advance then the combining doors argument is completely justified'. The real problem with the 'combining doors' solution is that it gives the same (and now wrong) answer for the case where the host reveals a goat by chance. This is a fundamental part of the problem, mentioned by vS right at the start and many others since. It is far more important to show why it matters that the host knows where the car is than to fuss about door numbers. Am I really the only person ever to have noticed this?

I don't see how we can apply a combined doors argument when the host reveals a goat by chance. The (verbal) combined doors argument explicitly uses the host's knowledge and deliberate decision to open a goat door. Richard Gill (talk) 14:04, 18 March 2013 (UTC)
It depends on whose argument you use. Devlin does say, ' I'll help you by using my knowledge of where the prize is to open one of those two doors to show you that it does not hide the prize. You can now take advantage of this additional information'. Even so I doubt that many people realise the significance of, 'using my knowledge of where the prize is'.
If Monty opens one of the other two doors at random and allows you to switch, we can also imagine that he's offering you the opportunity to exchange your first door for the other two. By always switching (to the car door if Monty reveals a car, to the closed door if Monty reveals a goat) you'll get the car with probability 2/3. In fact you get it whenever your first door hides a goat. Now that unconditional 2/3 is the weighted average of the conditional chance of getting the car when Monty shows you the car (1) and the conditional chance of getting the car when Monty shows you a goat (p). The weights are, obviously, 1/3 and 2/3. So 1/3 + 2/3 p = 2/3, from which I deduce p = 1/2.
So we can use a "combining doors argument" too, but it doesn't tell us directly the chance of getting the car given *what* Monty showed us. We want to know the conditional probability! And we still didn't take account of *which* door was opened. Similarly in standard MHP, the combining doors argument doesn't take account of *which* door was opened. But of course, in neither problem is this information useful. By symmetry the "information" is actually non-informative. Richard Gill (talk) 18:43, 18 March 2013 (UTC)
I am not sure what you are getting at here. There is a commonly discussed scenario where the host reveals a car and the player can switch to that door.
I'm saying that you *can* use the combined doors argument in the scenario where Monty opens one of the two doors at random, and it *does* give the right answer: 2/3 unconditional. (Unconditional on whether in a particular case he happens to reveal a car or a goat, and unconditional on which door it happened to be that he opened, 2 or 3). You have to do more work to get to the conditional result: conditional on that he happened to reveal a goat behind door 3, the chance of winning by switching is 1/2. Richard Gill (talk) 08:06, 19 March 2013 (UTC)
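Both numbers in the comment above — 2/3 unconditionally, and 1/2 conditional on a goat being revealed — can be checked by simulation. A sketch under my own modelling assumptions (uniform car placement and player pick, host opening one of the other two doors uniformly at random, player always switching to the best available door):

```python
import random

n = 200_000
wins = goat_shown = goat_wins = 0
for _ in range(n):
    car = random.randrange(3)
    pick = random.randrange(3)
    opened = random.choice([d for d in range(3) if d != pick])  # may reveal the car
    # always switch: take the revealed car if shown, else the other closed door
    final = opened if opened == car else next(
        d for d in range(3) if d != pick and d != opened)
    win = final == car
    wins += win
    if opened != car:
        goat_shown += 1
        goat_wins += win

print(wins / n)                # ~2/3 unconditionally
print(goat_wins / goat_shown)  # ~1/2 given that a goat was revealed
```

The weighted-average identity 1/3 + (2/3)p = 2/3 from the comment above is exactly what the two printed estimates exhibit.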
Adams just says, ' "Monty is saying in effect: you can keep your one door or you can have the other two doors.' This argument would appear to work if host reveals a goat by chance.

Martin Hogbin (talk) 14:59, 18 March 2013 (UTC)

This is the important part of what I said. Based on what Adams says you would expect the 'combining doors' solution to work, but it does not (if the player must choose an unopened door). Martin Hogbin (talk) 18:50, 18 March 2013 (UTC)
It does work, if the host truly offers the player the choice of either of the two other doors. Richard Gill (talk) 08:07, 19 March 2013 (UTC)
Yes, of course it works if the host truly offers the player the choice of either of the two other doors - but he doesn't.
There are many possible variants of this problem but you will see from the comments we have had on the talk pages and from the points made by many sources that have discussed the problem that the one thing that puzzles most people is why it matters that the host knows where the car is (and therefore always reveals a goat). You may find it interesting to consider the variant where the player can switch to either of the two doors that he did not originally choose, including the door opened by the host, but I do not think anyone else does. There may be many ambiguities in vos Savant's problem statement but there is no doubt that the player is never offered the option of having the prize behind the door that the host has just opened. This option would indeed make a bizarre game show.
Under the standard rules, except that the host reveals a goat by chance (which is considered by some people to be an ambiguity in the problem statement) a player who switches gains no advantage. This is true whether or not we specify the door opened by the host. The best way, in my opinion, to explain why this difference from the standard problem exists is to use Bayes' rule. Martin Hogbin (talk) 09:42, 19 March 2013 (UTC)
Yes. And here you are following Jef Rosenthal (article, popular book chapter). Richard Gill (talk) 16:05, 19 March 2013 (UTC)
So what do you think of my suggestions on the MHP talk page? Martin Hogbin (talk) 16:52, 19 March 2013 (UTC)
I shall take a look! Richard Gill (talk) 06:47, 20 March 2013 (UTC)

Luckily, I took your advice about Bayes' rule. Ignoring door numbers, this provides a trivial proof that the 'combining doors' solution is justified, and intuitively shows why the answer changes when the host reveals a goat by chance. Rumiton seemed to be finally convinced by this argument.

Bayes' rule also provides a simple and intuitive fix for the 'combining doors' solution when door numbers are considered significant. Do you agree? Martin Hogbin (talk) 13:56, 18 March 2013 (UTC)

I agree. I have been saying so for a number of years and even published such a fix in several places (Citizendium and StatProb online encyclopedias; also in a kind of addendum note on my webpage). See http://www.math.leidenuniv.nl/~gill/#MHP for links to everything else. Richard Gill (talk) 14:04, 18 March 2013 (UTC)

Legalities[edit]

You probably have something interesting to say about the role of statistics in law. :) Kiefer.Wolfowitz 23:07, 9 April 2013 (UTC)

I could say an awful lot on this subject! Is it a wikipedia article? Richard Gill (talk) 05:49, 10 April 2013 (UTC)
I should be happy to hear you talk on this subject. Kiefer.Wolfowitz 06:05, 10 April 2013 (UTC)

Disambiguation link notification for April 20[edit]

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that you've added some links pointing to disambiguation pages. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

Bayes' rule (check to confirm | fix with Dab solver)
added a link pointing to Likelihood ratio
Bayes' theorem (check to confirm | fix with Dab solver)
added a link pointing to Likelihood ratio
Bertrand's box paradox (check to confirm | fix with Dab solver)
added a link pointing to Likelihood ratio

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:55, 20 April 2013 (UTC)

Meaning of ignoring the door numbers[edit]

Hi - Can you please comment in this thread, http://en.wikipedia.org/wiki/Talk:Monty_Hall_problem/Arguments#The_doors_are_not_necessary? Perhaps Martin might listen to you (he clearly isn't listening to me). -- Rick Block (talk) 05:58, 2 May 2013 (UTC)

Martin does not listen to me either. I thought that comparing three doors to three cups might illuminate this discussion. However, like Marilyn vos Savant, he does not see any difference. Richard Gill (talk) 08:08, 2 May 2013 (UTC)
The thread was specifically focused on the meaning of "2/3 of what" (using a concrete, rather than abstract, sample space of 900 shows). Martin seems to misunderstand the meaning (vis a vis the relevant sample space) of ignoring door numbers, apparently thinking a conclusion based on ignoring door numbers applies to a sample selected in a way that does not ignore door numbers. Is not the Bayesian's indifference (based on lack of knowledge) simply technical jargon for talking about things that are indistinguishable - in which case a representative sample space must also treat those things as indistinguishable? In particular, a Bayesian concluding there's a 2/3 chance of winning by switching for a (single) player who has picked door 1 and has then seen the host open door 3 is NOT saying the conditional probabilities the car is behind door 1 and door 2 given the host opens door 3 are 1/3 and 2/3 (in the sense that these are the expected values that will be confirmed by the law of large numbers when observing shows where players indeed pick door 1 and the host indeed opens door 3). Instead, isn't this 1/3:2/3 answer talking about a player who has picked a door (any door) and has then seen the host open a different door (either other door) - i.e. the entire sample space of 900 shows (not just those where the player picked door 1 and the host opened door 3)? I think this confusion about "2/3 of what" is perhaps at the root of much of Martin's, let's say, peculiar notions about probability. -- Rick Block (talk) 15:53, 2 May 2013 (UTC)
I see your point. But Martin never will. Regarding probability, he is a self-taught man and he is very smart. He does not have the patience to learn alternative (but conventional) ways of seeing things when he thinks he sees them perfectly well in his own way. Just like Marilyn vos Savant. Richard Gill (talk) 12:13, 3 May 2013 (UTC)

Failed to parse[edit]

I wanted to point out that recent edits you made to Bertrand's Box Paradox are resulting in a parse error. I get this error in both Chrome Version 26.0.1410.64 m and Explorer 8.0.7601. Are you seeing this error?

Failed to parse (lexing error): \frac{\text{P(see gold | GG)}{\text{P(see gold | GG)+P(see gold | SS)+P(see gold | GS)}}=\frac{1}{1+0+1/2}= \frac{2}{3}

Note that the formatting error does not occur in earlier versions, starting with: https://en.wikipedia.org/w/index.php?title=Bertrand%27s_box_paradox&oldid=551101605

The change you made on 01:58, 19 April 2013‎ seems to have introduced the problem.

--Coastside (talk) 21:29, 7 May 2013 (UTC)

I see the error. Weird. Will try to rewrite formula. Richard Gill (talk) 13:24, 8 May 2013 (UTC)

Steve Gull[edit]

FYI I stubbed Steve Gull. Glrx (talk) 21:49, 15 August 2013 (UTC)

Excellent! Steve tells me his office is in 18 boxes while he moves across the road in Cambridge but I have the idea he does know exactly where, in there, the two pages are which I mentioned on Bell's Theorem. Richard Gill (talk) 14:29, 16 August 2013 (UTC)
The faster he moves, the less sure he is of the position. Glrx (talk) 18:54, 16 August 2013 (UTC)
Maybe the slower he moves, the less sure he becomes. Richard Gill (talk) 09:55, 17 August 2013 (UTC)

Question about Gull proof[edit]

Professor Gill, thanks to you and Professor Gull for posting the sketch proof regarding Bell's theorem. ([4]) To help in understanding it, I've attempted to restate it as follows:

(1) A deterministic computer program that is intended to duplicate the results of QM implies the existence of a function p1(polarizer angle, trial number).

(2) QM implies that the probability p2 of correlation of measurements of polarizations made with polarizers set at two different angles equals 1/4(1 - cos(difference in angles)).

(3) The functions p1 and p2 must be equal if the program is to duplicate the results of QM.

(4) The Fourier transform of p1 in the trial number domain will be an infinite series of randomly varying 1's and 0's.

(5) The Fourier transform of p2 has only three non-zero components.

Conclusion: The Fourier transforms are not equal, so there is no such program that can duplicate the results of QM.

Is this an accurate restatement of the proof? If so, why is (3) above true?J-Wiki (talk) 20:10, 28 September 2013 (UTC)

It's a bit more complicated and actually I think Steve Gull has missed something, which is important, but fortunately can be fixed.

Consider one run. The detectors have to give identical outcomes when set to the same angles. So the information sent from the source to each detector must be a definite instruction, for each detector setting theta, to give an outcome +1 or -1. The instruction must be the same for both detectors. Let me denote the instruction by a function f(theta), theta in [0, 2pi], taking values in {-1, +1}. Suppose now, in one run, Alice uses angle theta and Bob uses angle theta+delta, where theta is chosen uniformly between zero and 2pi. The correlation between Alice's and Bob's outcomes is rho(delta) = int_0^{2pi} f(theta) f(theta+delta) d theta / 2pi. Here I am thinking of the instruction function f as defined for angles outside [0, 2pi] by extending it periodically.

The formula for rho says that the correlation function rho is the convolution of the functions f and g, where g is f mirrored about zero (g(x) = f(-x)). The Fourier transform of a mirrored function is the complex conjugate of the transform of the original function, and the Fourier transform of a convolution is a product. Therefore FT(rho) = |FT(f)|^2; in other words, the Fourier coefficients of rho are the squared absolute values of the Fourier coefficients of f.
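That identity is easy to check numerically. A sketch with my own discretization (the circle of angles replaced by 360 grid points, f a random ±1 instruction function, rho its circular autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 360                              # discretize the circle of angles
f = rng.choice([-1.0, 1.0], size=N)  # one +/-1 instruction function

# circular autocorrelation: rho(delta) = (1/N) sum_theta f(theta) f(theta+delta)
rho = np.array([np.mean(f * np.roll(f, -d)) for d in range(N)])

F = np.fft.fft(f)
R = np.fft.fft(rho)

# the Fourier coefficients of rho equal |F|^2 / N: real and nonnegative
assert np.allclose(R, np.abs(F) ** 2 / N)
print(np.min(R.real))  # nonnegative; the singlet correlation -cos(delta) has a
                       # negative Fourier coefficient, so no instruction function
                       # can produce it
```

The nonnegativity of the Fourier coefficients of rho is the whole point: it is a constraint which the quantum correlation violates.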

So you see I think that Steve did not quite tell us everything: he is adding a random rotation between 0 and 2pi before defining correlations. But it is legitimate, since the computer programs could be used to simulate this experiment.

His computer program would actually create a possibly different function f in each run. The observed correlation would be the average of the correlations observed in many runs. We should now think of the function f as being a random function. But still, each realization of f has a Fourier transform, and the Fourier transform of rho is the average of the Fourier transforms for each f. The n-th Fourier coefficient of rho must be the average of the squared absolute value of the n-th coefficient of f.

Again there is a conceptual step missing in Steve's outline: different instruction functions f in each run of the experiment. The computer program would use a random generator to make a different f each time.

I've written to Steve with these comments. Richard Gill (talk) 05:07, 29 September 2013 (UTC)

Professor Gill, thanks very much for your explanation and comments. However, because it is part of the deterministic program, the random generator would necessarily be a pseudorandom number generator, and thus cyclic. Is this OK? Otherwise, if it must be truly random, of course the only source for this is QM... J-Wiki (talk) 02:32, 2 October 2013 (UTC)
Good question. Indeed the computer program probably uses some deterministic rule to create so-called pseudo-random numbers. Eventually the cycle will repeat. The starting point of the simulations will be determined by the program from the computer time and date, or it will be chosen by the programmer by using his birthdate or lucky number to pick a point in the cycle. Let's think of the initial seed of the random generator as random. Then the first N numbers from the generator are jointly random though of course the second, third, .... up to the Nth are just some complicated function of the previous numbers. A good pseudo random generator is such that if we pick the seed at random and then look at the first N numbers, call them U1, ... , UN, these are close in joint distribution to N independent random uniform (0,1) numbers. In usual applications N is much much much smaller than the cycle length of the generator!
So I think that from a practical point of view this is OK. From a metaphysical point of view there are interesting questions about randomness. Of course when we only simulate a Bell-type experiment N times there is a chance that we will violate Bell inequalities "purely by chance". The usual proofs of Bell's theorem talk about expectation values, i.e. averages after infinitely many runs of the experiment. In real experiments we only observe finitely many pairs of particles and we use sample averages as proxies for population means. I recently wrote some new proofs of Bell's theorem which take account of "finite statistics". The final conclusion is a probability statement ... the chance of such-and-such a large violation of Bell's inequality is smaller than something very tiny, if the number of runs is so-and-so large. Richard Gill (talk) 09:43, 2 October 2013 (UTC)
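To illustrate the "finite statistics" point: here is a sketch of a local hidden-variable simulation using the instruction-function picture from the discussion above. The particular model (both wings apply the same function sign(cos(theta - lam)) with a shared random lam), the settings, and the run counts are all my own choices; the sample CHSH statistic fluctuates around its expectation but stays within the Bell bound |S| <= 2, whereas quantum mechanics reaches 2*sqrt(2).

```python
import math, random

def corr(a, b, n, rng):
    """Sample correlation when both wings apply the same instruction
    function f(theta) = sign(cos(theta - lam)), lam the shared hidden variable."""
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = 1 if math.cos(b - lam) >= 0 else -1
        total += A * B
    return total / n

rng = random.Random(42)
n = 50_000
a1, a2 = 0.0, math.pi / 3          # Alice's two settings (my choice)
b1, b2 = math.pi / 6, math.pi / 2  # Bob's two settings (my choice)
S = (corr(a1, b1, n, rng) + corr(a1, b2, n, rng)
     + corr(a2, b1, n, rng) - corr(a2, b2, n, rng))
print(S)  # ~2/3 for these settings; any local model keeps |S| <= 2
```

For this model the theoretical correlation is 1 - 2|a - b|/pi, so S here is about 2/3; with only n runs per setting pair the printed value deviates from that by sampling noise of order 1/sqrt(n), which is exactly the "finite statistics" issue.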
Professor, thank you for the analysis. J-Wiki (talk) 01:07, 4 October 2013 (UTC)

License tagging for File:Bell.svg[edit]

Thanks for uploading File:Bell.svg. You don't seem to have indicated the license status of the image. Wikipedia uses a set of image copyright tags to indicate this information.

To add a tag to the image, select the appropriate tag from this list, click on this link, then click "Edit this page" and add the tag to the image's description. If there doesn't seem to be a suitable tag, the image is probably not appropriate for use on Wikipedia. For help in choosing the correct tag, or for any other questions, leave a message on Wikipedia:Media copyright questions. Thank you for your cooperation. --ImageTaggingBot (talk) 20:05, 22 December 2013 (UTC)

A barnstar for you![edit]

Special Barnstar Hires.png The Special Barnstar
Thank you for your exceptional work on Bell's Theorem, for your effort as a specialist when the article was in need of expert attention! The encyclopedia received a great benefit through your contribution. Thank you for taking the time to discuss with other editors, ask for advice and listen to their concerns! This makes you an example for the community.

Thank you for making Wikipedia a better place to be! Alma (talk) 20:55, 27 December 2013 (UTC)

Hear hear, even though we may not agree about absolutely everything. Martin Hogbin (talk) 11:08, 24 March 2014 (UTC)
Wouldn't that be boring? Thanks! Richard Gill (talk) 11:36, 24 March 2014 (UTC)

Practical experiments testing Bell's theorem[edit]

In this article, seems like you meant \cos(2\theta) instead of \cos(\theta/2).  wolfRAMM  17:24, 21 March 2014 (UTC)

Mistake, indeed; thank you. Boris Tsirelson (talk) 21:06, 21 March 2014 (UTC)
Thank you, both! Richard Gill (talk) 10:42, 24 March 2014 (UTC)