Three Prisoners problem

From Wikipedia, the free encyclopedia

The Three Prisoners problem appeared in Martin Gardner's "Mathematical Games" column in Scientific American in 1959.[1][2] It is mathematically equivalent to the Monty Hall problem, with the car and the goats replaced by freedom and execution, respectively.

Problem

Three prisoners, A, B, and C, are in separate cells and sentenced to death. The governor has selected one of them at random to be pardoned. The warden knows which one is pardoned, but is not allowed to tell. Prisoner A begs the warden to let him know the identity of one of the two who are going to be executed. "If B is to be pardoned, give me C's name. If C is to be pardoned, give me B's name. And if I'm to be pardoned, secretly flip a coin to decide whether to name B or C."

The warden tells A that B is to be executed. Prisoner A is pleased, believing that his probability of surviving has gone up from 1/3 to 1/2, as it is now between him and C. Prisoner A secretly tells C the news. C reasons that A's chance of being pardoned is unchanged at 1/3, but is pleased because his own chance has gone up to 2/3. Which prisoner is correct?

Solution

The answer is that prisoner A did not gain any information about his own fate, since he already knew that the warden would give him the name of someone else. Before hearing from the warden, A estimates his chance of being pardoned as 1/3, the same as B and C. When the warden says that B will be executed, it is either because C will be pardoned (1/3 chance), or because A will be pardoned (1/3 chance) and the coin the warden flipped to decide whether to name B or C came up B (1/2 chance, for an overall 1/3 × 1/2 = 1/6 chance that B was named because A will be pardoned). Hence, after hearing that B will be executed, the probability that A is pardoned is half the probability that C is pardoned: A's chance remains 1/3, while C's chance rises to 2/3.
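The reasoning above can be checked with a short Monte Carlo simulation (a sketch in Python; the trial count and function names are illustrative):

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the Three Prisoners problem and estimate the chances of
    A and C being pardoned, conditioned on the warden naming B."""
    rng = random.Random(seed)
    a_pardoned = c_pardoned = warden_says_b = 0
    for _ in range(trials):
        pardoned = rng.choice("ABC")   # the governor's uniformly random pardon
        if pardoned == "A":
            named = rng.choice("BC")   # warden flips a fair coin
        elif pardoned == "B":
            named = "C"                # warden may not name the pardoned prisoner
        else:
            named = "B"
        if named == "B":
            warden_says_b += 1
            if pardoned == "A":
                a_pardoned += 1
            elif pardoned == "C":
                c_pardoned += 1
    return a_pardoned / warden_says_b, c_pardoned / warden_says_b

p_a, p_c = simulate()
print(f"P(A pardoned | warden names B) = {p_a:.3f}")  # close to 1/3
print(f"P(C pardoned | warden names B) = {p_c:.3f}")  # close to 2/3
```

With enough trials the two estimates settle near 1/3 and 2/3, matching the analysis.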

Table

The explanation above may be summarised in the following table. Since the warden was asked by A, he can only answer that B or C is to be executed (i.e., "not pardoned").

Being pardoned   Warden: "not B"   Warden: "not C"   Sum
A                1/6               1/6               1/3
B                0                 1/3               1/3
C                1/3               0                 1/3

Since the warden has answered that B will not be pardoned, the solution comes from the column "not B". The odds of A versus C being pardoned are therefore 1/6 : 1/3, that is, 1 : 2.
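The joint probabilities in the table can be reproduced by a short enumeration (a sketch in Python; Fraction keeps the arithmetic exact):

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)

# joint[(pardoned, named)] = P(that prisoner is pardoned AND warden names `named`)
joint = {
    ("A", "B"): third * half,  # A pardoned, warden's coin came up B
    ("A", "C"): third * half,  # A pardoned, warden's coin came up C
    ("B", "C"): third,         # B pardoned, warden must name C
    ("C", "B"): third,         # C pardoned, warden must name B
}

# Print one table row per prisoner: "not B" column, "not C" column, and row sum
for prisoner in "ABC":
    not_b = joint.get((prisoner, "B"), Fraction(0))
    not_c = joint.get((prisoner, "C"), Fraction(0))
    print(prisoner, not_b, not_c, not_b + not_c)
```

The printed rows match the table: each prisoner's row sums to 1/3, and the "not B" column shows the 1/6 vs. 1/3 split between A and C.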

Mathematical formulation

Call A, B, and C the events that the corresponding prisoner will be pardoned, and b the event that the warden tells A that prisoner B is to be executed. Then, using Bayes' theorem, the posterior probability of A being pardoned is:[3]

P(A | b) = P(b | A) P(A) / [P(b | A) P(A) + P(b | B) P(B) + P(b | C) P(C)]
         = (1/2 × 1/3) / (1/2 × 1/3 + 0 × 1/3 + 1 × 1/3) = 1/3.

The probability of C being pardoned, on the other hand, is:

P(C | b) = P(b | C) P(C) / [P(b | A) P(A) + P(b | B) P(B) + P(b | C) P(C)]
         = (1 × 1/3) / (1/2 × 1/3 + 0 × 1/3 + 1 × 1/3) = 2/3.

The crucial difference making A and C unequal is that P(b | A) = 1/2 but P(b | C) = 1. If A will be pardoned, the warden can tell A that either B or C is to be executed, and hence P(b | A) = 1/2; whereas if C will be pardoned, the warden can only tell A that B is to be executed, so P(b | C) = 1.
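The Bayes calculation generalises to any priors and likelihoods, which a few lines of Python make explicit (a sketch; the dictionary-based `posterior` helper is illustrative, not from the source):

```python
from fractions import Fraction

def posterior(target, priors, likelihoods):
    """P(target is pardoned | warden names B), by Bayes' theorem."""
    total = sum(likelihoods[x] * priors[x] for x in priors)
    return likelihoods[target] * priors[target] / total

priors = {"A": Fraction(1, 3), "B": Fraction(1, 3), "C": Fraction(1, 3)}
# P(warden names B | each prisoner is the one pardoned)
likelihoods = {"A": Fraction(1, 2), "B": Fraction(0), "C": Fraction(1)}

print(posterior("A", priors, likelihoods))  # 1/3
print(posterior("C", priors, likelihoods))  # 2/3
```

The asymmetry between the likelihoods 1/2 and 1 is exactly what drives the 1/3 vs. 2/3 split.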

An intuitive explanation

Prisoner A has only a 1/3 chance of pardon, and knowing whether B or C will be executed does not change that. After he hears that B will be executed, prisoner A realizes that if he does not get the pardon himself, it must go to C. That means there is a 2/3 chance for C to get a pardon. This is comparable to the Monty Hall problem.

Enumeration of possible cases

The following scenarios may arise:

  1. A is pardoned and the warden mentions B to be executed: 1/3 × 1/2 = 1/6 of the cases
  2. A is pardoned and the warden mentions C to be executed: 1/3 × 1/2 = 1/6 of the cases
  3. B is pardoned and the warden mentions C to be executed: 1/3 of the cases
  4. C is pardoned and the warden mentions B to be executed: 1/3 of the cases

With the stipulation that the warden chooses randomly, in the 1/3 of cases where A is to be pardoned there is a 1/2 chance he will say B and a 1/2 chance he will say C. Taken overall, then, 1/6 of the time (1/3 [A is pardoned] × 1/2 [warden says B]) the warden says B because A will be pardoned, and 1/6 of the time (1/3 × 1/2) he says C because A will be pardoned. This adds up to 1/3 of the time (1/6 + 1/6) in which A is pardoned, matching A's prior probability.

It is now clear that if the warden answers B to A (1/2 of all cases), then 1/3 of the time C is pardoned and A will still be executed (case 4), and only 1/6 of the time A is pardoned (case 1). Hence C's chances are (1/3)/(1/2) = 2/3 and A's are (1/6)/(1/2) = 1/3.

The key to this problem is that the warden may not reveal the name of a prisoner who will be pardoned. If we eliminate this requirement, the original problem can be demonstrated in another way. The only change is that prisoner A asks the warden to reveal the fate of one of the other prisoners (without specifying one who will be executed). In this case, the warden flips a coin to choose between B and C, and reveals the fate of the chosen prisoner. The cases are as follows:

  1. A pardoned, warden says: B executed (1/6)
  2. A pardoned, warden says: C executed (1/6)
  3. B pardoned, warden says: B pardoned (1/6)
  4. B pardoned, warden says: C executed (1/6)
  5. C pardoned, warden says: B executed (1/6)
  6. C pardoned, warden says: C pardoned (1/6)

Each scenario has a 1/6 probability. The original Three Prisoners problem can be seen in this light: The warden in that problem still has these six cases, each with a 1/6 probability of occurring. However, the warden in the original case cannot reveal the fate of a pardoned prisoner. Therefore, in case 3 for example, since saying "B is pardoned" is not an option, the warden says "C is executed" instead (making it the same as case 4). That leaves cases 4 and 5 each with a 1/3 probability of occurring and leaves us with the same probability as before.
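The six equally likely cases, and the way the original rule collapses cases 3 and 6 into their neighbours, can be sketched in Python (the helper names are hypothetical):

```python
from fractions import Fraction

sixth = Fraction(1, 6)

# Unrestricted variant: (pardoned prisoner, coin shows) -> warden's statement
cases = {}
for pardoned in "ABC":
    for coin in "BC":
        if coin == pardoned:
            cases[(pardoned, coin)] = f"{coin} pardoned"
        else:
            cases[(pardoned, coin)] = f"{coin} executed"

def restricted(pardoned, coin):
    """Original rule: the warden may not reveal a pardoned prisoner,
    so he names the other (executed) prisoner instead."""
    statement = cases[(pardoned, coin)]
    if statement.endswith("pardoned"):
        other = "C" if coin == "B" else "B"
        return f"{other} executed"
    return statement

# Total probability of each statement under the original rule
prob = {}
for key in cases:
    s = restricted(*key)
    prob[s] = prob.get(s, Fraction(0)) + sixth

print({k: str(v) for k, v in prob.items()})  # {'B executed': '1/2', 'C executed': '1/2'}

# Conditioning on "B executed" recovers A's unchanged 1/3 chance
says_b = [k for k in cases if restricted(*k) == "B executed"]
p_a = sum(sixth for (p, c) in says_b if p == "A") / prob["B executed"]
print(p_a)  # 1/3
```

Merging the forbidden statements into their executed-prisoner equivalents reproduces the 1/2 split between the two answers and the 1/3 posterior for A.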

Why the paradox?

The tendency of people to provide the answer 1/2 is likely due to a tendency to ignore context that may seem unimpactful. For example, how the question is posed to the warden can affect the answer. This can be shown by considering a modified case, where the prior probabilities of being pardoned are P(A) = 1/4, P(B) = 1/4, and P(C) = 1/2, and everything else about the problem remains the same.[3] Using Bayes' theorem once again:

P(A | b) = (1/2 × 1/4) / (1/2 × 1/4 + 0 × 1/4 + 1 × 1/2) = 1/5.

However, if A simply asks if B will be executed, and the warden responds with "yes", the probability that A is pardoned becomes:

P(A | B executed) = P(A) / (P(A) + P(C)) = (1/4) / (1/4 + 1/2) = 1/3.[3]
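The gap between the two questions can be verified numerically (a sketch assuming the unequal priors P(A) = P(B) = 1/4, P(C) = 1/2 of Shimojo and Ichikawa's modified problem):

```python
from fractions import Fraction

# Priors for the modified problem (an assumption taken from the variant above)
priors = {"A": Fraction(1, 4), "B": Fraction(1, 4), "C": Fraction(1, 2)}

# Question 1: "name someone other than me who will be executed"; warden names B.
# Likelihoods of that answer given each possible pardon:
lik_names_b = {"A": Fraction(1, 2), "B": Fraction(0), "C": Fraction(1)}
num = lik_names_b["A"] * priors["A"]
den = sum(lik_names_b[x] * priors[x] for x in priors)
print(num / den)  # 1/5

# Question 2: "will B be executed?"; warden answers yes.
# Here we condition only on the event "B is not pardoned".
print(priors["A"] / (priors["A"] + priors["C"]))  # 1/3
```

The same fact ("B will be executed") yields 1/5 or 1/3 depending on which experiment produced it, which is precisely the point of the section.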

A further implicit assumption is that A planned beforehand to ask the warden for this information. A case similar to the one above arises if A did not plan to ask the warden anything and the warden simply informs him that he will be executing B.[4]

Another likely overlooked assumption is that the warden makes a probabilistic choice. Let us define p as the conditional probability that the warden will name B given that both B and C are to be executed (that is, given that A is to be pardoned). The conditional probability of A being pardoned can then be expressed as:

P(A | b) = (p × 1/3) / (p × 1/3 + 0 × 1/3 + 1 × 1/3) = p / (p + 1).

If we assume that p = 1, that is, if we do not take into account that the warden is making a probabilistic choice, then P(A | b) = 1/2. However, the reality of the problem is that the warden is flipping a fair coin (p = 1/2), so P(A | b) = 1/3.[4]
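The dependence of A's posterior on the warden's coin can be expressed directly (a sketch; `posterior_a` is an illustrative name for the formula p / (p + 1)):

```python
from fractions import Fraction

def posterior_a(p):
    """P(A pardoned | warden names B) when the warden names B with
    probability p whenever both B and C are to be executed."""
    return (p * Fraction(1, 3)) / (p * Fraction(1, 3) + 1 * Fraction(1, 3))

print(posterior_a(Fraction(1)))     # 1/2: warden assumed to always name B
print(posterior_a(Fraction(1, 2)))  # 1/3: warden flips a fair coin
```

The naive 1/2 answer thus corresponds to silently assuming p = 1 rather than the fair coin the problem actually specifies.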

Judea Pearl (1988) used a variant of this example to demonstrate that belief updates must depend not merely on the facts observed but also on the experiment (i.e., query) that led to those facts.[5]

References

  1. ^ Gardner, Martin (October 1959). "Mathematical Games: Problems involving questions of probability and ambiguity". Scientific American. 201 (4): 174–182. doi:10.1038/scientificamerican1059-174.
  2. ^ Gardner, Martin (1959). "Mathematical Games: How three modern mathematicians disproved a celebrated conjecture of Leonhard Euler". Scientific American. 201 (5): 188. doi:10.1038/scientificamerican1159-181.
  3. ^ a b c Shimojo, Shinsuke; Ichikawa, Shin'Ichi (August 1990). "Intuitive reasoning about probability: Theoretical and experimental analyses of the "problem of three prisoners"". Cognition. 36 (2): 205. doi:10.1016/0010-0277(89)90012-7.
  4. ^ a b Wechsler, Sergio; Esteves, L. G.; Simonis, A.; Peixoto, C. (February 2005). "Indifference, Neutrality and Informativeness: Generalizing the Three Prisoners Paradox". Synthese. 143 (3): 255–272. Retrieved 15 December 2021.
  5. ^ Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (First ed.). San Mateo, CA: Morgan Kaufmann.
