Sleeping Beauty problem

From Wikipedia, the free encyclopedia

The Sleeping Beauty problem is a puzzle in probability theory and formal epistemology in which an ideally rational epistemic agent is to be woken once or twice according to the toss of a coin, and asked her degree of belief for the coin having come up heads.

The problem was originally formulated in unpublished work by Arnold Zuboff (later published as "One Self: The Logic of Experience"[1]) and was subsequently presented in a paper by Adam Elga;[2] it builds on earlier problems of imperfect recall and the older "paradox of the absentminded driver". The name "Sleeping Beauty" was first given to the problem in extensive discussion in the Usenet newsgroup rec.puzzles in 1999.[3]

The problem

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice during the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be wakened and interviewed on Monday only; if the coin comes up tails, she will be wakened and interviewed on both Monday and Tuesday. In either case she will be wakened on Wednesday without an interview, and the experiment ends.

Any time Sleeping Beauty is wakened and interviewed, she is asked, "What is your belief now for the proposition that the coin landed heads?"

Solutions

The problem continues to generate debate.

Thirder position

The thirder position argues that the probability of heads is 1/3. Adam Elga argued for this position originally[2] as follows: Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted principle of indifference, her credence that it is Monday should equal her credence that it is Tuesday since being in one situation would be subjectively indistinguishable from the other. In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus

P(Tails and Tuesday) = P(Tails and Monday).

Consider now that Sleeping Beauty is told upon awakening, and comes to fully believe, that it is Monday. She knows the experimental procedure does not require the coin to be tossed until Tuesday morning, as the result affects only what happens after the Monday interview. Since the objective chance of the coin landing heads equals the chance of it landing tails, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus

P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday).

Since these three outcomes are exhaustive and exclusive for one trial, the probability of each is one-third by the previous two steps in the argument.

Another argument is based on long-run average outcomes. Suppose the experiment were repeated 1,000 times. On average there would be 500 heads and 500 tails, so Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only one-third of her awakenings would follow heads. This long-run frequency should fix her credence in a single trial, so P(Heads) = 1/3.
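The frequency argument can be checked with a short simulation (an illustrative sketch, not drawn from the cited sources; the trial count and seed are arbitrary):

```python
import random

random.seed(0)
trials = 100_000
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5  # fair coin toss
    if heads:
        total_awakenings += 1      # woken on Monday only
        heads_awakenings += 1
    else:
        total_awakenings += 2      # woken on Monday and Tuesday

# Fraction of awakenings that follow a heads toss: close to 1/3.
print(heads_awakenings / total_awakenings)
```

The simulation counts per-awakening rather than per-trial, which is exactly the point of contention between thirders and halfers.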

Nick Bostrom argues that the thirder position is implied by the Self-Indication Assumption.

Halfer position

David Lewis responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2.[4] Sleeping Beauty receives no new non-self-locating information throughout the experiment because she is told the details of the experiment. Since her credence before the experiment is P(Heads) = 1/2, she ought to continue to have a credence of P(Heads) = 1/2 since she gains no new relevant evidence when she wakes up during the experiment. This directly contradicts one of the thirder's premises, since it means P(Tails | Monday) = 1/3 and P(Heads | Monday) = 2/3.

Nick Bostrom argues that Sleeping Beauty does have new evidence about her future from Sunday: "that she is now in it," but does not know whether it is Monday or Tuesday, so the halfer argument fails.[5]

Double Halfer position

The double halfer position[6] argues that both P(Heads) and P(Heads | Monday) equal 1/2. Mikaël Cozic,[7] in particular, argues that context-sensitive propositions like "it is Monday" are in general problematic for conditionalization and proposes the use of an imaging rule instead, which supports the double halfer position.

Phenomenalist position

The phenomenalist position argues that Sleeping Beauty's credence is meaningless until it is attached to consequences. Suppose Sleeping Beauty is asked, not to give her credence, but her guess, and if she guesses right, she wins some money. If she wins some money for each correct guess, then she should guess tails, and her position is similar to the thirder position. If she wins some money only for a correct guess on Monday, then she should be indifferent, and her position is similar to the halfer position.[8]
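The two payoff schemes can be compared by computing expected winnings directly (an illustrative sketch; the one-unit stake and the helper `expected_winnings` are assumptions made for the example):

```python
# Scheme A (per_awakening=True): 1 unit for each correct guess, at every awakening.
# Scheme B (per_awakening=False): 1 unit only for a correct guess on Monday.

def expected_winnings(guess, per_awakening):
    total = 0.0
    # The two coin outcomes are equally likely; tails brings two awakenings.
    for outcome, awakenings in [("heads", ["Mon"]), ("tails", ["Mon", "Tue"])]:
        paying_days = awakenings if per_awakening else ["Mon"]
        correct = sum(1 for _ in paying_days if guess == outcome)
        total += 0.5 * correct
    return total

# Scheme A: guessing tails pays twice as much as guessing heads (1.0 vs 0.5).
print(expected_winnings("tails", True), expected_winnings("heads", True))
# Scheme B: both guesses pay the same (0.5 each), so Beauty is indifferent.
print(expected_winnings("tails", False), expected_winnings("heads", False))
```

Under scheme A the optimal guess matches thirder reasoning; under scheme B the indifference matches halfer reasoning, illustrating why the phenomenalist ties credence to stakes.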

Variations

The days of the week are irrelevant, but are included because they are used in some expositions. A non-fantastical variation called The Sailor's Child has been introduced by Radford Neal. The problem is sometimes discussed in cosmology as an analogue of questions about the number of observers in various cosmological models.

The problem does not necessarily need to involve a fictional situation. For example, computers can be programmed to act as Sleeping Beauty and not know when they are being run; consider a program that is run twice after tails is flipped and once after heads is flipped.
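Such a program might be sketched as follows (an illustrative example, not any published formulation; the stateless `beauty` function is hypothetical). The guesser is called once after heads and twice after tails, and has no way to tell which call it is receiving:

```python
import random

def beauty():
    """Stateless guesser: it cannot tell which awakening (call) this is."""
    return "tails"

random.seed(1)
calls_total = 0
calls_correct = 0
for _ in range(50_000):
    flip = random.choice(["heads", "tails"])
    wakings = 1 if flip == "heads" else 2  # run once after heads, twice after tails
    for _ in range(wakings):
        calls_total += 1
        calls_correct += beauty() == flip

# Fraction of calls on which the fixed guess "tails" is correct: close to 2/3.
print(calls_correct / calls_total)
```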

Extreme Sleeping Beauty

Formulated by Nick Bostrom, this variant differs from the original in that Beauty is wakened one million and one times if the coin comes up tails.

References

  1. ^ Zuboff, A. (1990) One Self: The Logic of Experience, Inquiry, 33, 39-68
  2. ^ a b Elga, A. (2000) Self-locating Belief and the Sleeping Beauty Problem, Analysis, 60, 143-147
  3. ^ http://www.maproom.co.uk/sb.html
  4. ^ Lewis, D. (2001) Sleeping Beauty: Reply to Elga, Analysis, 61(271), 171-176
  5. ^ Bostrom, N. (2007) Sleeping Beauty and Self-Location: A Hybrid Model, Synthese, 157(1), 59-78. www.anthropic-principle.com/preprints/beauty/synthesis.pdf
  6. ^ Meacham, C. J. (2008). Sleeping beauty and the dynamics of de se beliefs. Philosophical Studies, 138(2), 245-269.
  7. ^ Mikaël Cozic, Imaging and Sleeping Beauty: A case for double-halfers, International Journal of Approximate Reasoning, Volume 52, Issue 2, February 2011, Pages 137-143
  8. ^ If a tree falls on Sleeping Beauty

Other works discussing the Sleeping Beauty problem

  • Arntzenius, F. (2002) Reflections on Sleeping Beauty, Analysis, 62-1, 53-62
  • Bostrom, Nick (2002-07-12). Anthropic Bias. Routledge (UK). pp. 195–96. ISBN 0-415-93858-9. 
  • Bruce, Colin (2004-12-21). Schrodinger's Rabbits: Entering the Many Worlds of Quantum. Joseph Henry Press. pp. 193–96. ISBN 0-309-09051-2. 
  • Bradley, D. (2003) Sleeping Beauty: a note on Dorr's argument for 1/3, Analysis, 63, 266-268
  • Dorr, C. (2002) Sleeping Beauty: in Defence of Elga, Analysis, 62, 292-296
  • Elga, A. (2000) Self-locating Belief and the Sleeping Beauty Problem, Analysis, 60, 143-147
  • Lewis, D. (2001) Sleeping Beauty: Reply to Elga, Analysis, 61, 171-176
  • Meacham, C. J. (2008) Sleeping Beauty and the Dynamics of De Se Beliefs, Philosophical Studies, 138(2), 245-269
  • Monton, B. (2002) Sleeping Beauty and the Forgetful Bayesian, Analysis, 62, 47-53
  • Neal, R., Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning, preprint
  • Zuboff, A. (1990) One Self: The Logic of Experience, Inquiry, 33, 39-68
  • Titelbaum, M. (2013) Quitting Certainties, 210-229, 233-237, 241-249, 250, 276-277
