Causal decision theory

From Wikipedia, the free encyclopedia

Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences. It contrasts with evidential decision theory, which recommends those actions that provide the agent with the best evidence of a desirable outcome.


In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as the maximization of the expected utility U of an action A, "calculated from probabilities of counterfactuals":[1]

U(A)=\sum\limits_{j} P(A > O_j) D(O_j),

where D(O_j) is the desirability of outcome O_j and P(A > O_j) is the counterfactual probability that, if A were done, then O_j would hold.
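Gibbard and Harper's formula can be sketched numerically. The following is a minimal illustration with two invented outcomes and made-up counterfactual probabilities and desirabilities; none of the numbers come from the source.

```python
def causal_expected_utility(counterfactual_probs, desirabilities):
    """U(A) = sum over outcomes j of P(A > O_j) * D(O_j)."""
    return sum(counterfactual_probs[o] * desirabilities[o] for o in desirabilities)

# Invented illustration: two outcomes, with counterfactual probabilities
# P(A > good) = 0.75 and P(A > bad) = 0.25.
probs = {"good": 0.75, "bad": 0.25}
desir = {"good": 100.0, "bad": -50.0}
print(causal_expected_utility(probs, desir))  # 0.75*100 - 0.25*50 = 62.5
```

Note that the weights are probabilities of counterfactuals, P(A > O_j), not conditional probabilities P(O_j | A); the distinction is what separates causal from evidential decision theory below.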

Difference from evidential decision theory

David Lewis proved[2] that the probability of a conditional P(A > O_j) does not always equal the conditional probability P(O_j | A).[3] If that were the case, causal decision theory would be equivalent to evidential decision theory, which uses conditional probabilities.

Gibbard and Harper showed that if we accept two axioms (one related to the controversial principle of the conditional excluded middle[4]), then the statistical independence of A and A > O_j suffices to guarantee that P(A > O_j) = P(O_j | A). However, there are cases in which actions and conditionals are not independent. Gibbard and Harper give an example in which King David wants Bathsheba but fears that summoning her would provoke a revolt.

Further, David has studied works on psychology and political science which teach him the following: Kings have two personality types, charismatic and uncharismatic. A king's degree of charisma depends on his genetic make-up and early childhood experiences, and cannot be changed in adulthood. Now, charismatic kings tend to act justly and uncharismatic kings unjustly. Successful revolts against charismatic kings are rare, whereas successful revolts against uncharismatic kings are frequent. Unjust acts themselves, though, do not cause successful revolts; the reason uncharismatic kings are prone to successful revolts is that they have a sneaky, ignoble bearing. David does not know whether or not he is charismatic; he does know that it is unjust to send for another man's wife. (p. 164)

In this case, evidential decision theory recommends that David abstain from Bathsheba, while causal decision theory, noting that whether David is charismatic or uncharismatic cannot be changed by his action, recommends sending for her.
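The structure of the example can be made concrete with invented numbers (none of the probabilities or utilities below come from Gibbard and Harper). The point is that the act of sending for Bathsheba is *evidence* that David is uncharismatic, so conditioning on it raises the expected probability of revolt, whereas the causal counterfactual probability of revolt is fixed by the prior over personality types:

```python
# All numbers are hypothetical, chosen only to exhibit the structure.
P_CHAR = 0.5                               # prior: David is charismatic
P_REVOLT = {"char": 0.1, "unchar": 0.9}    # revolt chance by personality type
U_BATHSHEBA, U_REVOLT = 10.0, -100.0

def cdt_utility(send):
    # Sending cannot change David's type, so the counterfactual revolt
    # probability is just the prior mixture over types.
    p_revolt = P_CHAR * P_REVOLT["char"] + (1 - P_CHAR) * P_REVOLT["unchar"]
    return (U_BATHSHEBA if send else 0.0) + p_revolt * U_REVOLT

def edt_utility(send, p_char_given_action):
    # Conditioning on the action shifts beliefs about David's type,
    # because only uncharismatic kings tend to act unjustly.
    p_revolt = (p_char_given_action * P_REVOLT["char"]
                + (1 - p_char_given_action) * P_REVOLT["unchar"])
    return (U_BATHSHEBA if send else 0.0) + p_revolt * U_REVOLT

# CDT: sending strictly gains the value of Bathsheba.
print(round(cdt_utility(True), 2), round(cdt_utility(False), 2))    # -40.0 -50.0
# EDT (assuming sending makes uncharisma 90% likely): abstain looks better.
print(round(edt_utility(True, 0.1), 2), round(edt_utility(False, 0.9), 2))  # -72.0 -18.0
```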



Newcomb's paradox is a classic example illustrating the potential conflict between causal and evidential decision theory: because your choice of one or two boxes cannot causally affect the Predictor's guess, causal decision theory recommends the two-boxing strategy.[1] Against an accurate Predictor, however, this strategy yields only $1,000 rather than $1,000,000. Similar concerns arise in problems like the prisoner's dilemma[5] and various other thought experiments.[6]
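The conflict can be exhibited with a short calculation; the 99% predictor accuracy below is an assumed figure for illustration. Evidential reasoning conditions on the choice as evidence of the prediction, while causal reasoning holds the (already-made) prediction fixed, so that two-boxing dominates by exactly the value of the transparent box:

```python
# Hypothetical setup: $1,000,000 in the opaque box, $1,000 in the
# transparent box, and an assumed 99%-accurate Predictor.
ACC = 0.99
M, K = 1_000_000, 1_000

# Evidential expected utility: treat your own choice as evidence
# of what the Predictor foresaw.
edt_one_box = ACC * M                        # opaque box is full with prob. ACC
edt_two_box = ACC * K + (1 - ACC) * (M + K)

# Causal expected utility: the choice cannot affect the prediction, so for
# any fixed probability p that the million is present, two-boxing gains
# exactly K over one-boxing.
def cdt_gain_from_two_boxing(p_million_present):
    one_box = p_million_present * M
    two_box = p_million_present * M + K
    return two_box - one_box

print(edt_one_box > edt_two_box)                 # True: EDT one-boxes
print(cdt_gain_from_two_boxing(0.5))             # 1000.0: CDT two-boxes regardless
```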

Probabilities of conditionals

As Michael John Shaffer points out,[4] there are difficulties with assigning probabilities to counterfactuals. One proposal is the "imaging" technique suggested by Lewis:[7] To evaluate P(A > O_j), move probability mass from each possible world w to the closest possible world w_A in which A holds, assuming A is possible. However, this procedure requires that we know what we would believe if we were certain of A; this is itself a conditional to which we might assign probability less than 1, leading to regress.[4]
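The imaging step can be sketched on a toy world space. Everything below — the four worlds, their priors, and the closeness assignment — is invented for illustration; Lewis's proposal only fixes the general recipe of shifting each world's mass to its closest A-world:

```python
# Toy model of Lewis-style "imaging": to evaluate P(A > O), move each
# world's probability mass to the closest world where A holds, then
# measure the probability of O in the resulting distribution.

worlds = {
    # name: (A holds?, O holds?, prior probability)
    "w1": (True,  True,  0.25),
    "w2": (True,  False, 0.25),
    "w3": (False, True,  0.125),
    "w4": (False, False, 0.375),
}
# Assumed closeness relation: each non-A world's single closest A-world.
closest_A_world = {"w3": "w1", "w4": "w2"}

def image_on_A(worlds, closest):
    """Shift each world's mass to its closest A-world (A-worlds keep their own)."""
    imaged = {w: 0.0 for w in worlds}
    for w, (holds_A, _, p) in worlds.items():
        imaged[w if holds_A else closest[w]] += p
    return imaged

imaged = image_on_A(worlds, closest_A_world)
p_A_counterfactual_O = sum(p for w, p in imaged.items() if worlds[w][1])
print(p_A_counterfactual_O)   # mass on O-worlds after imaging: 0.25 + 0.125 = 0.375
```

Shaffer's regress worry enters at the closeness assignment: deciding which A-world is "closest" already encodes what one would believe were A certain, and that belief may itself only merit a probability short of 1.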

References


  1. ^ a b Gibbard, A.; Harper, W.L. (1981), "Counterfactuals and two kinds of expected utility", Ifs: Conditionals, Beliefs, Decision, Chance, and Time: 153–190 
  2. ^ Lewis, D. (1976), "Probabilities of conditionals and conditional probabilities", The Philosophical Review (Duke University Press) 85 (3): 297–315, doi:10.2307/2184045, JSTOR 2184045 
  3. ^ In fact, Lewis proved a stronger result: "if a class of probability functions is closed under conditionalizing, then there can be no probability conditional for that class unless the class consists entirely of trivial probability functions," where a trivial probability function is one that "never assigns positive probability to more than two incompatible alternatives, and hence is at most four-valued [...]."
  4. ^ a b c Shaffer, Michael John (2009), "Decision Theory, Intelligent Planning and Counterfactuals", Minds and Machines 19 (1): 61–92, doi:10.1007/s11023-008-9126-2 
  5. ^ Lewis, D. (1979), "Prisoners' dilemma is a Newcomb problem", Philosophy & Public Affairs (Blackwell Publishing) 8 (3): 235–240, JSTOR 2265034 
  6. ^ Egan, A. (2007), "Some counterexamples to causal decision theory", The Philosophical Review 116 (1): 93–114, doi:10.1215/00318108-2006-023, archived from the original on 2009-10-25, retrieved 2009-05-28 
  7. ^ Lewis, D. (1981), "Causal decision theory", Australasian Journal of Philosophy 59 (1): 5–30, doi:10.1080/00048408112340011, retrieved 2009-05-29 
