Counterfactual conditional

From Wikipedia, the free encyclopedia

A counterfactual conditional (abbreviated CF) is a conditional containing an if-clause that is contrary to fact. The term "counterfactual conditional" was coined by Nelson Goodman in 1947,[1] extending Roderick Chisholm's (1946) notion of a "contrary-to-fact conditional".[2] The study of counterfactual speculation has increasingly engaged the interest of scholars in a wide range of domains such as philosophy,[3] human geography, psychology,[4] cognitive psychology,[5] history,[6] political science,[7] economics,[8] social psychology,[9] law,[10] organizational theory,[11] marketing,[12] and epidemiology.[13]

In 1748, when defining causation, David Hume referred to a counterfactual case:

"… we may define a cause to be an object, followed by another, and where all objects, similar to the first, are followed by objects similar to the second. Or in other words, where, if the first object had not been, the second never had existed …" — David Hume, An Enquiry Concerning Human Understanding.[14]


The difference between indicative and counterfactual conditionals, in a context of past time reference, is one of emphasis. This can be illustrated with a pair of examples in which the if clause is in the past indicative in the first example but in the pluperfect subjunctive in the second:

  • If Oswald did not shoot Kennedy, then someone else did.
  • If Oswald had not shot Kennedy, then someone else would have.

The protasis (the if clause) of the first sentence may or may not be true according to the speaker, so the apodosis (the then clause) also may or may not be true; the apodosis is asserted by the speaker to be true if the protasis is true. In this sentence the if clause and the then clause are both in the past tense of the indicative mood.

In the second sentence, the speaker is speaking with a certainty that Oswald did shoot Kennedy. According to the speaker, the if clause is false, so the then clause deals with the counterfactual result, i.e., what would have happened. In this sentence the if clause is in the pluperfect subjunctive form of the subjunctive mood, and the then clause is in the conditional perfect form of the conditional mood.

A corresponding pair of examples with present time reference uses the present indicative in the if clause of the first sentence but the past subjunctive in the second sentence's if clause:

  • If it is raining, then he is inside.
  • If it were raining, then he would be inside.

Here again, in the first sentence the if clause may or may not be true; the then clause may or may not be true but certainly (according to the speaker) is true conditional on the if clause being true. Here both the if clause and the then clause are in the present indicative. In the second sentence, the if clause is not true, while the then clause may or may not be true but certainly would be true in the counterfactual circumstance of the if clause being true. In this sentence, the if clause is in the past subjunctive form of the subjunctive mood, and the then clause is in the conditional mood.

Reversal of clauses

The use of the terms "antecedent" and "consequent" is ambiguous. In logic, "antecedent" usually means the if-clause, but it is sometimes used for whichever clause comes first, so in a statement like "I'd do it, if I knew how" it is preferable to avoid these terms altogether.[citation needed] "Protasis" and "apodosis" avoid the ambiguity entirely.


People engage in counterfactual thinking frequently. Experimental evidence indicates that people's thoughts about counterfactual conditionals differ in important ways from their thoughts about indicative conditionals.


Participants in experiments were asked to read sentences, including counterfactual conditionals, e.g., 'if Mark had left home early he would have caught the train'. Afterwards, they were asked to identify which sentences they had been shown. They often mistakenly believed they had been shown sentences corresponding to the presupposed facts, e.g., 'Mark did not leave home early' and 'Mark did not catch the train' (Fillenbaum, 1974). In other experiments, participants were asked to read short stories that contained counterfactual conditionals, e.g., 'if there had been roses in the flower shop then there would have been lilies'. Later in the story they read sentences corresponding to the presupposed facts, e.g., 'there were no roses and there were no lilies'. The counterfactual conditional 'primed' them to read the sentence corresponding to the presupposed facts very rapidly; no such priming effect occurred for indicative conditionals (Santamaria, Espino, and Byrne, 2005). They spend different amounts of time 'updating' a story that contains a counterfactual conditional compared to one that contains factual information (De Vega, Urrutia, and Riffo, 2007) and they focus on different parts of counterfactual conditionals (Ferguson and Sanford, 2008).


Experiments have compared the inferences people make from counterfactual conditionals and indicative conditionals. Given a counterfactual conditional, e.g., 'If there had been a circle on the blackboard then there would have been a triangle', and the subsequent information 'in fact there was no triangle', participants make the modus tollens inference 'there was no circle' more often than they do from an indicative conditional (Byrne and Tasso, 1999). Given the counterfactual conditional and the subsequent information 'in fact there was a circle', participants make the modus ponens inference as often as they do from an indicative conditional. See counterfactual thinking.

Psychological accounts

Ruth M.J. Byrne proposed in The Rational Imagination: How People Create Alternatives to Reality that people construct mental representations that encompass two possibilities when they understand, and reason from, a counterfactual conditional, e.g., 'if Oswald had not shot Kennedy, then someone else would have'. They envisage the conjecture 'Oswald did not shoot Kennedy and someone else did' and they also think about the presupposed facts 'Oswald did shoot Kennedy and someone else did not' (Byrne, 2005). According to the mental model theory of reasoning, they construct mental models of the alternative possibilities, as described in Deduction (Johnson-Laird and Byrne, 1991).

Philosophical treatments


In order to distinguish counterfactual conditionals from material conditionals, a new logical connective '>' is defined, where A > B can be interpreted as "If it were the case that A, then it would be the case that B."

The truth value of a material conditional, A → B, is determined by the truth values of A and B. This is not so for the counterfactual conditional A > B: different situations can agree on the truth values of A and B yet yield different evaluations of A > B. For example, if Keith is in Germany, the following two conditionals both have a false antecedent and a false consequent:

  1. if Keith were in Mexico then he would be in Africa.
  2. if Keith were in Mexico then he would be in North America.

Indeed, if Keith is in Germany, then all three conditions "Keith is in Mexico", "Keith is in Africa", and "Keith is in North America" are false. However, (1) is obviously false, while (2) is true as Mexico is part of North America.
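This truth-functional contrast can be made concrete with a short sketch (the language and variable names here are purely illustrative):

```python
def material_conditional(a: bool, b: bool) -> bool:
    """A -> B is false only when A is true and B is false."""
    return (not a) or b

# Keith is in Germany, so both the antecedent and the consequent are
# false in each of the two example sentences:
in_mexico = False
in_africa = False
in_north_america = False

# Read as material conditionals, (1) and (2) come out identically true
# (a false antecedent makes the whole conditional true), even though
# intuitively (1) is false and (2) is true.
print(material_conditional(in_mexico, in_africa))         # (1) -> True
print(material_conditional(in_mexico, in_north_america))  # (2) -> True
```

Since no function of the two truth values can separate (1) from (2), the connective '>' cannot be truth-functional.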

Possible world semantics

Philosophers such as David Lewis and Robert Stalnaker modeled counterfactuals using the possible world semantics of modal logic. The semantics of a conditional A > B are given by some function on the relative closeness of worlds where A is true and B is true, on the one hand, and worlds where A is true but B is not, on the other.

On Lewis's account, A > C is (a) vacuously true if and only if there are no worlds where A is true (for example, if A is logically or metaphysically impossible); (b) non-vacuously true if and only if, among the worlds where A is true, some worlds where C is true are closer to the actual world than any world where C is not true; or (c) false otherwise. Although in Lewis's Counterfactuals it was unclear what he meant by 'closeness', in later writings, Lewis made it clear that he did not intend the metric of 'closeness' to be simply our ordinary notion of overall similarity.

Consider an example:

If I had eaten more at breakfast, I would not have been hungry at 11am.

On Lewis's account, the truth of this statement consists in the fact that, among possible worlds where I ate more for breakfast, there is at least one world where I am not hungry at 11am and which is closer to our world than any world where I ate more for breakfast but am still hungry at 11am.
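Lewis's truth conditions can be sketched over a toy set of worlds. Everything below is invented for illustration: a single numeric "distance" stands in for Lewis's (deliberately more subtle) notion of comparative closeness, and the facts about breakfast are the article's running example.

```python
# Each world is a dict of facts plus a distance from the actual world;
# smaller distance = closer. These worlds and distances are made up.
worlds = [
    {"ate_more": True,  "hungry_at_11": False, "distance": 1},
    {"ate_more": True,  "hungry_at_11": True,  "distance": 3},
    {"ate_more": False, "hungry_at_11": True,  "distance": 0},  # actual
]

def lewis_counterfactual(worlds, antecedent, consequent):
    """A > C: vacuously true if there is no A-world; otherwise true iff
    some (A and C)-world is closer than every (A and not-C)-world."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return True  # case (a): vacuous truth
    ac = [w["distance"] for w in a_worlds if consequent(w)]
    a_not_c = [w["distance"] for w in a_worlds if not consequent(w)]
    if not ac:
        return False
    if not a_not_c:
        return True
    return min(ac) < min(a_not_c)  # case (b) vs. case (c)

# "If I had eaten more at breakfast, I would not have been hungry at 11am"
print(lewis_counterfactual(worlds,
                           lambda w: w["ate_more"],
                           lambda w: not w["hungry_at_11"]))  # True
```

The conditional comes out true because the closest ate-more world (distance 1) is one where I am not hungry, beating the ate-more-but-hungry world at distance 3.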

Stalnaker's account differs from Lewis's most notably in his acceptance of the Limit and Uniqueness Assumptions. The Uniqueness Assumption is the thesis that, for any antecedent A, there is a single closest possible world where A is true; the Limit Assumption is the weaker thesis that, for any antecedent A, there is at least one closest world (or set of equally close closest worlds) where A is true. (Notice that the Uniqueness Assumption entails the Limit Assumption, but the Limit Assumption does not entail the Uniqueness Assumption.) On Stalnaker's account, A > C is non-vacuously true if and only if C is true at the closest world where A is true. So the above example is true just in case, at the single closest world where I eat more breakfast, I do not feel hungry at 11am. Lewis, controversially, rejected the Limit Assumption (and therefore the Uniqueness Assumption) because it rules out the possibility that there might be worlds that get closer and closer to the actual world without limit. For example, there might be an infinite series of worlds, each with my coffee cup a smaller fraction of an inch to the left of its actual position, but none of which is uniquely the closest. (See Lewis 1973: 20.)

One consequence of Stalnaker's acceptance of the Uniqueness Assumption is that, if the law of excluded middle is true, then all instances of the formula (A > C) ∨ (A > ¬C) are true. The law of excluded middle is the thesis that for all propositions p, p ∨ ¬p is true. If the Uniqueness Assumption is true, then for every antecedent A, there is a uniquely closest world where A is true. If the law of excluded middle is true, any consequent C is either true or false at that world where A is true. So for every counterfactual A > C, either A > C or A > ¬C is true. This is called conditional excluded middle (CEM). Consider the following example:

(1) If the fair coin had been flipped, it would have landed heads.
(2) If the fair coin had been flipped, it would have landed tails (i.e. not heads).

On Stalnaker's analysis, there is a closest world where the fair coin mentioned in (1) and (2) is flipped, and at that world it lands either heads or tails. So either (1) is true and (2) is false, or (1) is false and (2) is true. On Lewis's analysis, however, both (1) and (2) are false, for the worlds where the fair coin lands heads are no more or less close than the worlds where it lands tails. For Lewis, 'If the coin had been flipped, it would have landed heads or tails' is true, but this does not entail that 'If the coin had been flipped, it would have landed heads, or: if the coin had been flipped, it would have landed tails.'
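The disagreement over conditional excluded middle can be made vivid with a toy model in which the two flipped-coin worlds are tied for closeness. This is an illustrative sketch only: the tie encodes the coin's fairness, and the tie-breaking `min()` stands in for Stalnaker's selection function, which must somehow pick one world.

```python
# Two flipped-coin worlds, equally close to the actual (unflipped) world.
flipped_worlds = [
    {"heads": True,  "distance": 1},
    {"heads": False, "distance": 1},  # tied: the coin is fair
]

def lewis(worlds, consequent):
    """True iff some consequent-world is strictly closer than every
    non-consequent world among the antecedent-worlds."""
    c = [w["distance"] for w in worlds if consequent(w)]
    not_c = [w["distance"] for w in worlds if not consequent(w)]
    if not c:
        return False
    if not not_c:
        return True
    return min(c) < min(not_c)

def stalnaker(worlds, consequent):
    """Evaluate the consequent at a single selected closest world."""
    closest = min(worlds, key=lambda w: w["distance"])  # breaks the tie
    return consequent(closest)

heads = lambda w: w["heads"]
tails = lambda w: not w["heads"]

# Lewis: (1) and (2) are both false, since neither outcome is closer.
print(lewis(flipped_worlds, heads), lewis(flipped_worlds, tails))
# Stalnaker: exactly one of (1), (2) is true at the selected world,
# so (A > C) or (A > not-C) always holds, as CEM requires.
print(stalnaker(flipped_worlds, heads) != stalnaker(flipped_worlds, tails))
```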

Other accounts


Counterfactual conditionals may also be evaluated using the so-called Ramsey test: A > B holds if and only if the addition of A to the current body of knowledge has B as a consequence. This condition relates counterfactual conditionals to belief revision, as the evaluation of A > B can be done by first revising the current knowledge with A and then checking whether B is true in what results. Revising is easy when A is consistent with the current beliefs, but can be hard otherwise. Every semantics for belief revision can be used for evaluating conditional statements. Conversely, every method for evaluating conditionals can be seen as a way for performing revision.
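The Ramsey test can be sketched in a deliberately minimal setting where beliefs are truth-value assignments and revision simply overwrites the belief that conflicts with A. This is a toy model: real belief-revision operators must handle arbitrary formulae and logical consequence, which is the hard case mentioned above.

```python
def revise(beliefs, a_var, a_val):
    """Revise beliefs with the antecedent A (here, a single literal),
    discarding only the directly conflicting belief about A's variable."""
    revised = dict(beliefs)   # leave the original belief state intact
    revised[a_var] = a_val
    return revised

def ramsey_test(beliefs, a, b):
    """A > B holds iff B follows from the beliefs revised with A."""
    a_var, a_val = a
    b_var, b_val = b
    return revise(beliefs, a_var, a_val).get(b_var) == b_val

beliefs = {"oswald_shot": True, "someone_shot": True}
# The counterfactual holds only if revising with "Oswald did not shoot"
# leaves "someone shot Kennedy" among the consequences.
print(ramsey_test(beliefs, ("oswald_shot", False), ("someone_shot", True)))
```

Because this toy revision never touches unrelated beliefs, "someone_shot" survives the revision, which is one (contestable) way of filling in what "as similar as possible" means.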


Ginsberg (1986) proposed a semantics for conditionals which assumes that the current beliefs form a set of propositional formulae; it considers the maximal subsets of these formulae that are consistent with A, and adds A to each. The rationale is that each of these maximal sets represents a possible state of belief in which A is true that is as similar as possible to the original one. The conditional statement A > B therefore holds if and only if B is true in every such set. A technical criticism of Ginsberg's semantics can be found in a review in Zentralblatt für Mathematik.[15]
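Ginsberg's construction can be sketched for the special case where every formula is a literal, which makes consistency checking trivial (an illustrative simplification; the general case works over arbitrary propositional formulae):

```python
from itertools import combinations

def consistent(literals):
    """A set of (variable, value) literals is consistent iff no
    variable is assigned both truth values."""
    vals = {}
    for var, val in literals:
        if vals.setdefault(var, val) != val:
            return False
    return True

def maximal_consistent_with(beliefs, a):
    """All maximal subsets of `beliefs` consistent with literal `a`,
    each with `a` added; these are the candidate revised states."""
    beliefs = list(beliefs)
    for size in range(len(beliefs), -1, -1):  # largest subsets first
        subsets = [set(s) for s in combinations(beliefs, size)
                   if consistent(set(s) | {a})]
        if subsets:
            return [s | {a} for s in subsets]
    return [{a}]

def ginsberg_conditional(beliefs, a, b):
    """A > B iff B holds in every maximal A-consistent revision."""
    return all(b in s for s in maximal_consistent_with(beliefs, a))

beliefs = {("raining", True), ("inside", True)}
# "If it were not raining, he would (still) be inside": retracting
# "raining" leaves "inside" in every maximal consistent set.
print(ginsberg_conditional(beliefs, ("raining", False), ("inside", True)))
```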

Within empirical testing

The counterfactual conditional is the basis of experimental methods for establishing causality in the natural and social sciences, e.g., whether taking antibiotics helps cure bacterial infection. For every individual, u, there is a function that specifies the state of u's infection under two hypothetical conditions: had u taken the antibiotic and had u not taken it. Only one of these states can be observed in any instance, since they are mutually exclusive. The overall effect of the antibiotic on infection is defined as the difference between these two states, averaged over the entire population. If the treatment and control groups are selected at random, the effect of the antibiotic can be estimated by comparing the rates of recovery in the two groups.
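The estimation strategy just described can be sketched as a small simulation. All probabilities and names below are invented for illustration: each unit carries both potential outcomes, but random assignment reveals only one of them.

```python
import random

random.seed(0)
# Hypothetical recovery probabilities: 0.8 with the antibiotic, 0.5 without.
population = [{"y_treated": 1 if random.random() < 0.8 else 0,
               "y_control": 1 if random.random() < 0.5 else 0}
              for _ in range(10000)]

# Random assignment: each unit's observed outcome is the potential
# outcome under the arm it happens to land in; the other stays hidden.
treated, control = [], []
for u in population:
    if random.random() < 0.5:
        treated.append(u["y_treated"])
    else:
        control.append(u["y_control"])

# The difference in observed recovery rates estimates the average effect,
# which by construction is close to 0.8 - 0.5 = 0.3 here.
ate_estimate = sum(treated) / len(treated) - sum(control) / len(control)
true_ate = sum(u["y_treated"] - u["y_control"]
               for u in population) / len(population)
print(round(ate_estimate, 2), round(true_ate, 2))  # both near 0.3
```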


The tight connection between causal and counterfactual relations has prompted Judea Pearl (2000) to reject both the possible world semantics and those of Ramsey and Ginsberg. The latter was rejected because causal information cannot be encoded as a set of beliefs, and the former because it is difficult to fine-tune Lewis's similarity measure to match causal intuition. Pearl defines counterfactuals directly in terms of a "structural equation model" – a set of equations, in which each variable is assigned a value that is an explicit function of other variables in the system. Given such a model, the sentence "Y would be y had X been x" (formally, X = x > Y = y ) is defined as the assertion: If we replace the equation currently determining X with a constant X = x, and solve the set of equations for variable Y, the solution obtained will be Y = y. This definition has been shown to be compatible with the axioms of possible world semantics and forms the basis for causal inference in the natural and social sciences, since each structural equation in those domains corresponds to a familiar causal mechanism that can be meaningfully reasoned about by investigators. [..]
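Pearl's definition can be sketched with a toy structural model. The variables and equations below are hypothetical, and for simplicity the equations are listed in causal order so that a single pass solves the system; a real structural causal model needs a general solver.

```python
def solve(model, intervention=None):
    """Solve a structural model given as (variable, equation) pairs in
    causal order. An intervention X = x replaces X's equation with the
    constant x before solving, exactly as in Pearl's definition."""
    values = {}
    for var, equation in model:
        if intervention and var in intervention:
            values[var] = intervention[var]  # equation replaced by constant
        else:
            values[var] = equation(values)
    return values

model = [
    ("u", lambda v: 1),                # exogenous background factor
    ("x", lambda v: v["u"]),           # treatment, normally determined by u
    ("y", lambda v: v["x"] + v["u"]),  # outcome
]

# Factually, x = 1 and y = 2. The counterfactual "Y would be 1 had X
# been 0" (X = 0 > Y = 1) is evaluated by intervening on x:
print(solve(model)["y"])            # 2
print(solve(model, {"x": 0})["y"])  # 1
```

Note that the intervention changes y only through x: the background factor u keeps its actual value, which is what distinguishes this from simply imagining a different world wholesale.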

See also


  1. ^ Goodman, N., "The Problem of Counterfactual Conditionals", The Journal of Philosophy, Vol. 44, No. 5, (27 February 1947), pp. 113–28.
  2. ^ Chisholm, R.M., "The Contrary-to-Fact Conditional", Mind, Vol. 55, No. 220, (October 1946), pp. 289–307.
  3. ^ Goodman, N., "The Problem of Counterfactual Conditionals", The Journal of Philosophy, Vol. 44, No. 5, (27 February 1947), pp. 113–28; Brown, R, & Watling, J., "Counterfactual Conditionals", Mind, Vol. 61, No. 242, (April 1952), pp. 222–33; Parry, W.T., "Reëxamination of the Problem of Counterfactual Conditionals", The Journal of Philosophy, Vol. 54, No. 4, (14 February 1957), pp. 85–94; Cooley, J.C., "Professor Goodman’s Fact, Fiction, & Forecast", The Journal of Philosophy, Vol. 54, No. 10, (9 May 1957), pp. 293–311; Goodman, N., "Parry on Counterfactuals", The Journal of Philosophy, Vol. 54, No. 14, (4 July 1957), pp. 442–45; Goodman, N., "Reply to an Adverse Ally", The Journal of Philosophy, Vol. 54, No. 17, (15 August 1957), pp. 531–35; Lewis, D., Counterfactuals, Basil Blackwell, (Oxford), 1973, etc.
  4. ^ Fillenbaum, S., "Information Amplified: Memory for Counterfactual Conditionals", Journal of Experimental Psychology, Vol. 102, No. 1, (January 1974), pp. 44–49; Crawford, M.T. & McCrea, S.M., "When Mutations meet Motivations: Attitude Biases in Counterfactual Thought", Journal of Experimental Social Psychology, Vol. 40, No. 1, (January 2004), pp. 65–74, etc.
  5. ^ Kahneman, D. & Tversky, A., "The Simulation Heuristic", pp. 201–08 in Kahneman, D., Slovic, p. & Tversky, A. (eds), Judgement Under Uncertainty: Heuristics and Biases, Cambridge University Press, (Cambridge), 1982; Sherman, S.J. & McConnell, A.R., "Dysfunctional Implications of Counterfactual Thinking: When Alternatives to reality Fail Us", pp. 199–231 in Roese, N.J. & Olson, J.M. (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Lawrence Erlbaum Associates, (Mahwah), 1995; Nasco, S.A. & Marsh, K.L., "Gaining Control Through Counterfactual Thinking", Personality and Social Psychology Bulletin, Vol. 25, No. 5, (May 1999), pp. 556–68; McCloy, R. & Byrne, R.M.J., "Counterfactual Thinking About Controllable Events", Memory and Cognition, Vol. 28, No. 6, (September 2000), pp. 1071–78; Byrne, R.M.J., "Mental Models and Counterfactual Thoughts About What Might Have Been", Trends in Cognitive Sciences, Vol. 6, No. 10, (October 2002), pp. 426–31; Thompson, V.A. & Byrne, R.M.J., "Reasoning Counterfactually: Making Inferences About Things That Didn't Happen", Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 28, No. 6, (November 2002), pp. 1154–70, etc.
  6. ^ Greenberg, M. (ed.), The Way It Wasn’t: Great Science Fiction Stories of Alternate History, Citadel Twilight, (New York), 1996; Dozois, G. & Schmidt, W. (eds.), Roads Not Taken: Tales of Alternative History, The Ballantine Publishing Group, (New York), 1998; Sylvan, D. & Majeski, S., "A Methodology for the Study of Historical Counterfactuals", International Studies Quarterly, Vol. 42, No. 1, (March 1998), pp. 79–108; Ferguson, N., (ed.), Virtual History: Alternatives and Counterfactuals, Basic Books, (New York), 1999; Cowley, R. (ed.), What If?: The World’s Foremost Military Historians Imagine What Might have Been, Berkley Books, (New York), 2000; Cowley, R. (ed.), What If? 2: Eminent Historians Imagine What Might have Been, G.P. Putnam’s Sons, (New York), 2001, etc.
  7. ^ Fearon, J.D., "Counterfactuals and Hypothesis Testing in Political Science", World Politics, Vol. 43, No. 2, (January 1991), pp. 169–95; Tetlock, P. E. & Belkin, A. (eds.), Counterfactual Thought Experiments in World Politics, Princeton University Press, (Princeton), 1996; Lebow, R.N., "What’s so Different about a Counterfactual?", World Politics, Vol. 52, No. 4, (July 2000), pp. 550–85; Chwieroth, J.M., "Counterfactuals and the Study of the American Presidency", Presidential Studies Quarterly, Vol. 32, No. 2, (June 2002), pp. 293–327, etc.
  8. ^ DN McCloskey (1987) “Counterfactuals” J Eatwell et al., The New Palgrave: A Dictionary of Economics (London: Macmillan), Vol. 1, pp. 701–03; J Heckman (2001) “Econometrics, Counterfactuals and Causal Models.” Keynote Address International Statistical Institute. Seoul, Korea; R Cowan & R Foray 2002) "Evolutionary Economics and the Counterfactual Threat: On the Nature and Role of Counterfactual History as an Empirical Tool in Economics" Journal of Evolutionary Economics, Vol. 12, No. 5 (December), pp. 539–62; JG Hülsmann (2003) “Facts and Counterfactuals in Economic Law” Journal of Libertarian Studies, Vol. 17, No. 1, pp. 57–102; N Cartwright (2007) “Counterfactuals in Economics: A Commentary” Hunting Causes and Using Them – Approaches in Philosophy and Economics (Cambridge, Cambridge University Press), pp. 236ff; M Hashem Pesaran & RP Smith (2012) “Counterfactual Analysis in Macroeconometrics: An Empirical Investigation into the Effects of Quantitative Easing” Research in Economics, Vol. 70, No. 2 (2016), pp. 262–80.
  9. ^ Roese, N.J. & Olson, J.M. (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Lawrence Erlbaum Associates, (Mahwah), 1995; Sanna, L.J., "Defensive Pessimism, Optimism, and Simulating Alternatives: Some Ups and Downs of Prefactual and Counterfactual Thinking", Journal of Personality and Social Psychology, Vol. 71, No. 5, (November 1996), pp. 1020–36; Roese, N.J., "Counterfactual Thinking", Psychological Bulletin, Vol. 121, No. 1, (January 1997), pp. 133–48; Sanna, L.J., "Defensive Pessimism and Optimism: The Bitter-Sweet Influence of Mood on Performance and Prefactual and Counterfactual Thinking", Cognition and Emotion, Vol. 12, No. 5, (September 1998), pp. 635–65; Sanna, L.J. & Turley-Ames, K.J., "Counterfactual Intensity", European Journal of Social Psychology, Vol. 30, No. 2, (March/April 2000), pp. 273–96; Sanna, L.J., Parks, C.D., Meier, S., Chang, E.C., Kassin, B.R., Lechter, J.L., Turley-Ames, K.J. & Miyake, T.M., "A Game of Inches: Spontaneous Use of Counterfactuals by Broadcasters During Major League Baseball Playoffs", Journal of Applied Social Psychology, Vol. 33, No. 3, (March 2003), pp. 455–75, etc.
  10. ^ Strassfeld, R.N., "If...: Counterfactuals in the Law", George Washington Law Review, Volume 60, No. 2, (January 1992), pp. 339–416; Spellman, B.A. & Kincannon, A., "The Relation between Counterfactual (“but for”) and Causal reasoning: Experimental Findings and Implications for Juror’s Decisions", Law and Contemporary Problems, Vol. 64, No. 4, (Autumn 2001), pp. 241–264; Prentice, R.A. & Koehler, J.J., "A Normality Bias in Legal Decision Making", Cornell Law Review, Vol. 88, No. 3, (March 2003), pp. 583–650, etc.
  11. ^ Creyer, E.H. & Gürhan, Z., "Who's to Blame? Counterfactual Reasoning and the Assignment of Blame", Psychology and Marketing, Vol. 14, No. 3, (May 1997), pp. 209–307; Zeelenberg, M., van Dijk, W.W., van der Plight, J., Manstead, A.S.R., van Empelen, P. & Reinderman, D., "Emotional Reactions to the Outcomes of Decisions: The Role of Counterfactual Thought in the Experience of Regret and Disappointment", Organizational Behavior and Human Decision Processes, Vol. 75, No. 2, (August 1998), pp. 117–41; Naquin, C.E. & Tynan, R.O., "The Team Halo Effect: Why Teams Are Not Blamed for Their Failures", Journal of Applied Psychology, Vol. 88, No. 2, (April 2003), pp. 332–40; Naquin, C.E., "The Agony of Opportunity in Negotiation: Number of Negotiable Issues, Counterfactual Thinking, and Feelings of Satisfaction", Organizational Behavior and Human Decision Processes, Vol. 91, No. 1, (May 2003), pp. 97–107, etc.
  12. ^ Hetts, J.J., Boninger, D.S., Armor, D.A., Gleicher, F. & Nathanson, A., "The Influence of Anticipated Counterfactual Regret on Behavior", Psychology & Marketing, Vol. 17, No. 4, (April 2000), pp. 345–68; Landman, J. & Petty, R., "“It Could Have Been You”: How States Exploit Counterfactual Thought to Market Lotteries", Psychology & Marketing, Vol. 17, No. 4, (April 2000), pp. 299–321; McGill, A.L., "Counterfactual Reasoning in Causal Judgements: Implications for Marketing", Psychology & Marketing, Vol. 17, No. 4, (April 2000), pp. 323–43; Roese, N.J., "Counterfactual Thinking and Marketing: Introduction to the Special Issue", Psychology & Marketing, Vol. 17, No. 4, (April 2000), pp. 277–80; Walchli, S.B. & Landman, J., "Effects of Counterfactual Thought on Postpurchase Consumer Affect", Psychology & Marketing, Vol. 20, No. 1, (January 2003), pp. 23–46, etc.
  13. ^ Randerson, J., "Fast action would have saved millions", New Scientist, Vol. 176, No. 2372, (7 December 2002), p. 19; Haydon, D.T., Chase-Topping, M., Shaw, D.J., Matthews, L., Friar, J.K., Wilesmith, J. & Woolhouse, M.E.J., "The Construction and Analysis of Epidemic Trees With Reference to the 2001 UK Foot-and-Mouth Outbreak", Proceedings of the Royal Society of London Series B: Biological Sciences, Vol. 270, No. 1511, (22 January 2003), pp. 121–27, etc.
  14. ^ Hume, D. (Beauchamp, T.L., ed.), An Enquiry Concerning Human Understanding, Oxford University Press, (Oxford), 1999, (7), p. 146 (emphasis in the original).
  15. ^ "Review of the paper: M. L. Ginsberg, "Counterfactuals," Artificial Intelligence 30 (1986), pp. 35–79", Zentralblatt für Mathematik, FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH: 13–14, 1989, Zbl 0655.03011 .


  • Bennett, Jonathan. (2003). A Philosophical Guide to Conditionals. Oxford University Press.
  • Bonevac, D. (2003). Deduction, Introductory Symbolic Logic. 2nd ed. Blackwell Publishers.
  • Byrne, R.M.J. (2005). The rational imagination: how people create alternatives to reality. Cambridge, MA: MIT Press.
  • Byrne, R.M.J. & Tasso, A. (1999). Deductive reasoning with factual, possible, and counterfactual conditionals. Memory & Cognition. 27, 726–40.
  • De Vega, M., Urrutia, M., Riffo, B. (2007). Canceling updating in the comprehension of counterfactuals embedded in narrative. Memory & Cognition, 35, 1410–21.
  • Edgington, Dorothy. (2001). "Conditionals". In Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
  • Edgington, Dorothy. (2006). "Conditionals". The Stanford Encyclopedia of Philosophy, Edward Zalta (ed.).
  • Ferguson, H.J. and Sanford, A.J. (2008) Anomalies in real and counterfactual worlds: an eye-movement investigation. J. Mem. Lang. 58, 609–26.
  • Fillenbaum, S. (1974). Information amplified: memory for counterfactual conditionals. Journal of Experimental Psychology, 102, 44–49.
  • Johnson-Laird, P.N. and Byrne, R.M.J. (1991). Deduction. Hillsdale, NJ: Erlbaum.
  • Morgan, Stephen L. and Christopher Winship. (2007). Counterfactuals and Causal Inference: Methods and Principles of Social Research. Cambridge University Press.
  • Ginsberg, M. L. (1986). "Counterfactuals". Artificial Intelligence, 30: 35–79.
  • Kożuchowski, Adam. "More than true: the rhetorical function of counterfactuals in historiography" Rethinking History (2015) 10#3 pp. 337–56.
  • Lewis, David. (1973). Counterfactuals. Blackwell Publishers. ISBN 0-631-22425-4
  • Judea Pearl (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press. ISBN 0-521-77362-8. 
  • Santamaria, C., Espino, O. and Byrne, R.M.J. (2005). Counterfactual and semifactual conditionals prime alternative possibilities. Journal of Experimental Psychology: Learning, Memory and Cognition. 31, 1149–54.
  • Thompson, V. and Byrne, R.M.J. (2002). Reasoning about things that didn't happen. Journal of Experimental Psychology: Learning, Memory, and Cognition. 28, 1154–70.