Talk:Newcomb's paradox

Paradox requirements

This is not a paradox because there is only one possible outcome based on the definition of the problem. Bensaccount 03:48, 25 Mar 2004 (UTC)

A paradox leads logically to self-contradiction. This does no such thing. Only an illogical argument with the problem itself will lead to contradiction. The problem leads only to a single final outcome. Bensaccount 04:04, 25 Mar 2004 (UTC)

This is indeed a paradox, as two widely accepted principles of decision making (Expected Utility and Dominance) contradict one another as to which is the best decision.

Kaikaapro 00:18, 13 September 2006 (UTC)[reply]

There is no contradiction. Choosing both boxes gives you $1,000; choosing B only gives you $1,000,000. No contradiction. It's only the counter-intuitive concept of backward causality which fools some people into arguing for taking both boxes.--88.101.76.122 (talk) 17:10, 6 April 2008 (UTC)[reply]
Uh, no. There is no stipulation that choosing B only gives you $1000000. That only follows if the past success of the predictor necessitates the predictor's future success, but there is no such necessity. If backwards causality of the sort posited here is logically impossible (and I believe it is), then it is more likely that the predictor will fail this time, regardless of how unlikely that is. -- 98.108.225.155 (talk) 07:34, 22 November 2010 (UTC)[reply]
Kaikaapro, there is no paradox; the apparent clash merely indicates that the expected utility is being miscalculated by assigning an incorrect Bayesian value to the predictor's assessment. Regardless of what the predictor has done in the past, dominance assures us that we still benefit by taking both boxes. -- 98.108.225.155 (talk) 07:34, 22 November 2010 (UTC)[reply]

It isn't a paradox in a logical sense, though it would appear counter-intuitive to many people, which would lead them to assume a paradox was involved rather than faulty assumptions. The accuracy of the predictions is outlined in the problem, and should be assumed. Effectively, the choice made is the prediction made. Even allowing for some error (the predictor is almost always correct), it would still pay off to assume 100% accuracy anyway. Ninahexan (talk) 02:25, 11 January 2011 (UTC)[reply]

Here is the problem properly stated. The superbeing Omega has been running this experiment, and has so far predicted each person's choice accurately. You are shown two boxes. One is see-through and has $1,000 in it. The other is opaque. Omega tells you this: "You may pick both boxes, or just box B. I have made a prediction about what you will choose. If I predicted you will take both boxes, then box B is empty. If I predicted that you will only take box B, then box B has $1,000,000 in it. I cannot change the contents of the boxes now. Make your choice." One argument is "Every person who has picked just B has gotten $1,000,000, and every person who has picked both has gotten $1,000, so the choice is obvious. I should be one of those who picked B." The other argument is "No matter what Omega's prediction was, I will get more money if I pick both boxes, therefore I should pick both." Note that Omega being infallible is not an assumption of the problem, but there have been zero failures so far. 74.211.60.216 (talk) 04:21, 15 October 2018 (UTC)[reply]
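
For readers who want to see the two arguments side by side, here is a minimal Python sketch, assuming the payoff amounts stated above and an illustrative 99% predictor accuracy (the 0.99 figure is an assumption, not part of the problem):

  # Payoffs as stated above: box A always holds $1,000; box B holds $1,000,000
  # only if Omega predicted "B only".
  def payoff(prediction, choice):
      box_a = 1_000
      box_b = 1_000_000 if prediction == "B" else 0
      return box_b if choice == "B" else box_a + box_b

  # Dominance argument: for either fixed prediction, taking both boxes pays $1,000 more.
  for prediction in ("B", "both"):
      assert payoff(prediction, "both") == payoff(prediction, "B") + 1_000

  # Expected-utility argument: if the prediction matches the choice with probability p
  # (here an assumed 0.99), one-boxing comes out far ahead.
  def expected(choice, p):
      other = "both" if choice == "B" else "B"
      return p * payoff(choice, choice) + (1 - p) * payoff(other, choice)

  print(expected("B", 0.99), expected("both", 0.99))  # 990000.0 vs 11000.0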

Whether the paradox is real

After several days' research and thought, I am firmly convinced 1) that this is a paradox with a non-trivial analysis and 2) that the original version (while imperfect) was closer to NPOV than the current version.

Bensaccount's primary complaint seems to be that because "reverse causation is defined into the problem" there is only one solution. However, free will is also "defined into the problem" - otherwise Chooser is not really making a choice. Using Bensaccount's framework, we have two mutually incompatible conclusions yet neither of the premises (free will and the ability to predict the future) can be easily or obviously dismissed as untrue.

OK, well proven; I didn't see that before. I stand corrected. Bensaccount 00:54, 1 Apr 2004 (UTC)
These two things don't contradict each other. Either you will freely decide to take both boxes, and your decision will cause box B to be empty; or you'll freely decide to take only box B, and you will cause box B to contain one million.--88.101.76.122 (talk) 17:13, 6 April 2008 (UTC)[reply]

Is it not logical to suggest that the predictor is the one lacking free will, having their choice dictated by the free will of the future chooser? Ninahexan (talk) 02:29, 11 January 2011 (UTC)[reply]

A strange variation

This is a long article with many posts; I searched but did not read exhaustively, so the following may be redundant.

I heard of a variation on this problem. Assume you have a friend standing on the other side of the table. Box B is also transparent (on the friend's side - you still can't see the contents). Your friend WANTS you to take both boxes.

What do you do? 67.172.122.167 (talk) 05:47, 31 July 2011 (UTC)[reply]

Oops. Aaaronsmith (talk) 05:49, 31 July 2011 (UTC)[reply]

Random device?

Just responding to this paragraph of the article:

If the player believes that the predictor can correctly predict any thoughts he or she will have, but has access to some source of random numbers that the predictor cannot predict (say, a coin to flip, or a quantum process), then the game depends on how the predictor will react to (correctly) knowing that the player will use such a process. If the predictor predicts by reproducing the player's process, then the player should open both boxes with 1/2 probability and will receive an average of $251,000; if the predictor predicts the most probable player action, then the player should open both with 1/2 - epsilon probability and will receive an average of ~$500,999.99; and if the predictor places $0 whenever they believe that the player will use a random process, then the traditional "paradox" holds unchanged.

This is all a bit tricky. Firstly, talking about a "coin" is misleading, since an (ideal) coin always has probability 1/2, but the writer is talking about what the player should do if they have access to a random device that can be set to decide between two outcomes with any desired probability. (This had me confused for a long while!)

In case 1 (the predictor replicates the process), if you select a 50/50 probability, the expected value of the payout is a straight average of all four possibilities ($500,500). (The writer's figure of $251,000 would be correct if your choices were Box A or both boxes. They are Box B or both.) This doesn't circumvent the paradox at all, though: choosing to open both boxes is superior to using the random device for the same reason as before (whatever is in Box B, you get more if you take both than if you don't), and choosing to open one box is superior to using the random device, since it gives a higher expected payout.

Case 2 is unclearly written, but I think the writer is saying "what if" the predictor responds to randomness by always going for the more likely outcome. In this case, setting the device to choose both boxes with probability 0.5 minus epsilon (where epsilon is a very small quantity) means there will always be a million in Box B. The average payout would be just under $1,000,500. (Again, the figure given would be correct if the choice were between Box A or both boxes.) This would be clearly the optimum strategy, and so there would be no paradox, if the predictor did indeed work like that.
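
A quick Python sketch of both cases, assuming the usual amounts ($1,000 in Box A, $1,000,000 in Box B when filled), for anyone checking the corrected figures:

  # Payoffs: (prediction, choice) -> amount received.
  payoffs = {("B", "B"): 1_000_000, ("B", "both"): 1_001_000,
             ("both", "B"): 0, ("both", "both"): 1_000}

  # Case 1: the predictor reruns the same 50/50 device, so prediction and choice
  # are independent and all four combinations are equally likely.
  case1 = sum(payoffs.values()) / 4
  print(case1)  # 500500.0

  # Case 2: the predictor goes with the player's most probable action. With the
  # device set to "both" with probability 0.5 - eps, the prediction is always "B",
  # so Box B is always full.
  eps = 1e-6
  case2 = (0.5 + eps) * payoffs[("B", "B")] + (0.5 - eps) * payoffs[("B", "both")]
  print(case2)  # approx. 1000499.999, i.e. just under $1,000,500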

But I don't see why we should be entitled to assume that the predictor can predict thoughts but not the outcome of a random device. If we simply stipulate that the predictor is capable of predicting the ACTUAL decision by unspecified means, then even mentioning a random device achieves nothing. The paradox remains: "You have two possible decisions, either of which can be shown by seemingly reasonable argument to be superior to the other."

Am I right? 2.25.135.6 (talk) 18:34, 18 December 2011 (UTC)[reply]

Well, if we accept that the brain is an inherently random device, then the problem is solved rather trivially: take both boxes, because you're using a random device to make the decision, so box B will be empty. But now we're not a random device, because we always give the same answer. But we still can't pick just box B. This could probably also turn into a trust problem. 176.35.126.251 (talk) 09:37, 16 September 2013 (UTC)[reply]

Christopher Michael Langan

Chris Langan has also proposed a solution to the problem. This was published in Noesis (number 44, December 1989 - January 1990). Noesis was the Journal of the Noetic Society. — Preceding unsigned comment added by 89.9.208.63 (talk) 21:46, 18 December 2012 (UTC)[reply]

 Done--greenrd (talk) 08:30, 25 February 2013 (UTC)[reply]

Imagine a world in which all couples have four children. After the (genX) mother reaches menopause, the government arrests three of their (genX+1) children, and either secretly kills them or doesn't. It then tells the (genX) parents that it has predicted whether they will kill their fourth (genX+1) child: if their (genX-1) parents killed one of their (genX) siblings, then the government will have predicted the (genX) couple will behave like their parents and kill their fourth (genX+1) child and the government will therefore release the other three. Conversely, if their (genX-1) parents did not kill one of their (genX) siblings, then the government will have predicted the (genX) couple will behave like their parents and will not kill their fourth (genX+1) child and the government has therefore already killed the other three.

Assuming the couple wants to perpetuate their genes, it is actually logical for them to kill the fourth child. If the government releases the other three children, it will then use this choice to predict their children's behaviour and will not kill the genX+2 children. Each generation will contain 150% of the genes of the previous one.

If the couple refuses to kill the fourth child, the government will kill all but one of their children, so their genes will gradually die out.
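
A quick sketch of the growth-rate arithmetic, reading "150% of the genes" as a 1.5x population growth factor and assuming every couple has exactly four children and pairs off monogamously:

  # Population after one generation, assuming monogamous couples with 4 children each.
  def next_generation(population, kill_fourth_child):
      couples = population / 2
      survivors_per_couple = 3 if kill_fourth_child else 1
      return couples * survivors_per_couple

  print(next_generation(1000, kill_fourth_child=True))   # 1500.0 -- grows by 1.5x
  print(next_generation(1000, kill_fourth_child=False))  # 500.0  -- halves, dies out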

John Blackwell (talk) 17:06, 25 June 2013 (UTC)[reply]

This is stupid. How does it even get started? This is like a loop in computer programming with no beginning. It doesn't even make sense.--greenrd (talk) 11:09, 22 September 2013 (UTC)[reply]

A clear definition of "optimal"

Does Newcomb provide a clear definition of "optimal"? It's explicit that we seek to determine which of two strategies is optimal, but there are multiple valid definitions of "optimal":

  • Maximising the minimum guaranteed amount of money gained
  • Maximising the expected amount of money gained
  • Maximising the maximum possible amount of money gained

87.113.40.254 (talk) 18:55, 5 July 2013 (UTC)[reply]

Neither strategy is optimal, but the first is very flawed

I see something like a paradox here, but it's not between the two listed strategies. Rather, the first strategy is flawed:

That is, if the prediction is for both A and B to be taken, then the player's decision becomes a matter of choosing between $1,000 (by taking A and B) and $0 (by taking just B), in which case taking both boxes is obviously preferable. But, even if the prediction is for the player to take only B, then taking both boxes yields $1,001,000, and taking only B yields only $1,000,000—taking both boxes is still better, regardless of which prediction has been made.

This strategy neglects two crucial details:

  • The predictor's decision will be influenced by the player's decision.
    • Specifically, the predictor's decision will match the player's decision with a high probability ("almost certain").
  • The predictor's decision will, in turn, influence the maximum possible prize.

In fact, considering that last point, it seems that Newcomb's problem is a bit like a one-sided version of the Prisoner's Dilemma.

Consider if the predictor is Laplace's demon. In this case:

  • Choosing box B has an expected value of $1,000,000.
  • Choosing both has an expected value of $1,000.

In this case, the second strategy (always choose B) is clearly superior.

This raises two issues, however:

  • The issue of 'free will' (which Laplace's demon precludes the existence of).
  • Universes where a perfect predictor cannot exist (due to e.g. a lack of time travel and a surplus of quantum mechanics)

Most of the 'free will' concern is not really an issue: a rational player often sacrifices some or all of their free will in order to maximize the expected value. For example, consider the Monty Hall problem: a rational player will always switch (regardless of what they consider maximum).

Things get more complicated in universes like ours, where a perfect predictor cannot exist.

Let us consider another predictor that is always wrong (Laplace's angel?). In this case:

  • Choosing box B has an expected value of $0.
  • Choosing both has an expected value of $1,001,000.

In this case, the first strategy (always choose both) is clearly superior.

It should be evident by now that the best strategy depends on the accuracy of the predictor.

Let P(C|B) represent the probability that the predictor was correct, given that the player chose only box B. Let P(C|AB) represent the probability that the predictor was correct, given that the player chose both boxes.

If the player chooses box B, the expected outcome is P(C|B) * $1,000,000. If the player chooses boxes A+B, the expected outcome is (1 - P(C|AB)) * $1,000,000 + $1,000.

It should be readily apparent that the "perfect predictor" is a special case where P(C) = 1.
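
A small Python sketch of this comparison, using the same amounts and treating the two conditional accuracies as free parameters (the break-even figure below follows from the formulas above and is not part of the original problem):

  # Expected payoffs as written above, with predictor accuracy as a free parameter.
  def ev_one_box(p_correct_given_B):
      return p_correct_given_B * 1_000_000

  def ev_two_box(p_correct_given_AB):
      return (1 - p_correct_given_AB) * 1_000_000 + 1_000

  # Perfect predictor, P(C) = 1: the Laplace's-demon case above.
  print(ev_one_box(1.0), ev_two_box(1.0))  # 1000000.0 vs 1000.0
  # Always-wrong predictor, P(C) = 0: the Laplace's-angel case.
  print(ev_one_box(0.0), ev_two_box(0.0))  # 0.0 vs 1001000.0
  # If the predictor is equally accurate against either strategy, the break-even
  # accuracy is 1,001,000 / 2,000,000 = 0.5005.
  print(ev_one_box(0.5005), ev_two_box(0.5005))  # both approximately 500500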

Therefore, the best meta-strategy is one of these two:

  • Choose a strategy that maximizes P(C|B) and chooses box B only.
  • Choose a strategy that minimizes P(C|AB) and chooses both boxes.

Which of these two is superior depends on the maximum achievable P(C|B) and the minimum achievable P(C|AB).

In fact, neither of the proposed strategies is truly optimal. They are too simple.

For example, if the predictor's accuracy is very high, an excellent strategy would be this:

  1. Arrange for a friend to call you after the brain scan, but before you make your choice, and say "Stop! I did the math again, and you should choose both boxes."
  2. Arrange for your friend to tell you "The best bet is to choose box B only" after the next step.
  3. Erase or block the memory of the previous two steps (lots of alcohol may help)

The only "paradox" I see is that, barring trickery like the above:

  • A completely rational player will always choose the best available strategy and stick with it, and is therefore very predictable, maximizing P(C). Therefore, such a player is better off choosing box B only.
  • A non-rational player (one with free will) may change strategies in the absence of new information, minimizing P(C). Therefore, such a player is better off choosing box A+B.
    • However, by trying to choose a strategy, such a player becomes rational, and thus more predictable -- and the strategy of choosing A+B yields a lower expected outcome than the strategy of choosing B. — Preceding unsigned comment added by Stevie-O (talkcontribs) 18:16, 14 January 2014 (UTC)[reply]

What a pedantic opening paragraph

The section titled The Problem currently begins with this paragraph:

A person is playing a game operated by the Predictor, an entity somehow presented as being exceptionally skilled at predicting people's actions. The exact nature of the Predictor varies between retellings of the paradox. Some assume that the character always has a reputation for being completely infallible and incapable of error; others assume that the predictor has a very low error rate. The Predictor can be presented as a psychic, a superintelligent alien, a deity, a brain-scanning computer, etc. However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made". With this original version of the problem, some of the discussion below is inapplicable.

Are these really the first 130 words we want people reading on this topic? Can the bulk of this pedantry wait a few paragraphs, after we've explained what the paradox actually is? --Doradus (talk) 14:19, 4 April 2015 (UTC)[reply]

Ok, I've moved most of this text to the end of the section. --Doradus (talk) 14:22, 4 April 2015 (UTC)[reply]

Applicability to the Real World = Original (and bad) Research?

Current text:

"Nozick's additional stipulation, in a footnote in the original article, attempts to preclude this problem by stipulating that any predicted use of a random choice or random event will be treated as equivalent, by the predictor, to a prediction of choosing both boxes. However, this assumes that inherently unpredictable quantum events (e.g. in people's brains) would not come into play anyway during the process of thinking about which choice to make,[12] which is an unproven assumption. Indeed, some have speculated that quantum effects in the brain might be essential for a full explanation of consciousness (see Orchestrated objective reduction), or - perhaps even more relevantly for Newcomb's problem - for an explanation of free will.[13]"

But Nozick's original stipulation clearly deals with consulting external decision-makers, i.e. randomness outside of the mind; it's a ban on "opting out." It doesn't at all "assume" quantum events "would not come into play," because it's not internal randomness that matters, it's external randomness. The citations aren't even arguing this point: (12) is about a computational model of the problem and concedes that "You can be modeled as a deterministic or nondeterministic transducer"; the article doesn't seem to care whether your mind is random or not. So maybe the easiest way to treat this is as original research. --Thomas Btalk 21:12, 22 May 2015 (UTC)[reply]

Does the problem actually continue to divide philosophers?

I think the Guardian reference is sensationalist and shouldn't be considered a reliable source in this instance; the underlying principle of the "paradox" has been identified and both solutions very succinctly presented. Are philosophers really "divided" about this, or do they simply discuss the different solutions and understand that the solutions are each valid for a game with different probabilities? Or am I giving philosophers too much credit in understanding probability? Bright☀ 09:53, 13 April 2018 (UTC)[reply]

The philpapers source poll shows philosophers give different answers; presumably the dispute is over something like the subjective normative question "what should I do if, without time to prepare in any way, I am suddenly faced with this problem?" Rolf H Nelson (talk) 19:31, 14 April 2018 (UTC)[reply]