The ludic fallacy is a term coined by Nassim Nicholas Taleb in his 2007 book The Black Swan. "Ludic" is from the Latin ludus, meaning "play, game, sport, pastime." It is summarized as "the misuse of games to model real-life situations." Taleb explains the fallacy as "basing studies of chance on the narrow world of games and dice."
It is a central argument in the book: a rebuttal of predictive mathematical models, and an attack on the idea of applying naïve, simplified statistical models to complex domains. According to Taleb, statistics works only in some domains, such as casinos, in which the odds are visible and defined. Taleb's argument centers on the idea that predictive models are based on Platonified forms, gravitating towards mathematical purity and failing to take some key ideas into account:
- It is impossible to be in possession of all the information.
- Very small unknown variations in the data could have a huge impact. Taleb differentiates this idea from related mathematical notions in chaos theory, e.g. the butterfly effect.
- Theories and models based on empirical data are flawed, as they cannot anticipate events that have never taken place before and for which no conclusive explanation or account can be provided.
Example 1: Suspicious coin 
One example given in the book is the following thought experiment. There are two people:
- Dr John, who is regarded as a man of science and logical thinking.
- Fat Tony, who is regarded as a man who lives by his wits.
A third party asks them, "Assume a fair coin is flipped 99 times, and each time it comes up heads. What are the odds that the 100th flip would also come up heads?"
- Dr John says that the odds are not affected by the previous outcomes so the odds must still be 50:50.
- Fat Tony says that the odds of the coin coming up heads 99 times in a row are so low (less than 1 in 6.33 × 10²⁹) that the initial assumption that the coin had a 50:50 chance of coming up heads is most likely incorrect.
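Fat Tony's figure can be checked directly: the probability of a fair coin landing heads 99 times in a row is (1/2)⁹⁹. A minimal sketch in Python:

```python
# Probability that a fair coin comes up heads 99 times in a row.
p_heads = 0.5
p_99_heads = p_heads ** 99

print(p_99_heads)      # about 1.58e-30
print(1 / p_99_heads)  # about 6.34e29, i.e. worse than 1 in 6.33 * 10^29
```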
The ludic fallacy here is to assume that in real life the rules from the purely hypothetical model (where Dr John is correct) apply. Would a reasonable person bet on black on a roulette table that has come up red 99 times in a row (especially when the reward for a correct guess is so low compared with the probable odds that the game is fixed)?
In classical terms, highly statistically significant (unlikely) events should make one question one's model assumptions. In Bayesian statistics, this can be modelled by using a prior distribution for one's assumptions on the fairness of the coin, then Bayesian inference to update this distribution.
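Such a Bayesian update can be sketched with a simple two-hypothesis model. The prior below (a 1-in-1,000 chance the coin is rigged) is an illustrative assumption, not a figure from the book:

```python
# Two hypotheses: the coin is fair (p = 0.5) or rigged towards heads (p = 0.99).
# The prior probability of a rigged coin is an illustrative assumption.
prior_fair = 0.999
prior_rigged = 0.001

# Likelihood of observing 99 heads in a row under each hypothesis.
lik_fair = 0.5 ** 99
lik_rigged = 0.99 ** 99

# Bayes' rule: posterior is proportional to prior times likelihood.
evidence = prior_fair * lik_fair + prior_rigged * lik_rigged
post_fair = prior_fair * lik_fair / evidence

print(post_fair)  # vanishingly small: the data overwhelm the prior
```

Even a strong prior belief in fairness collapses after 99 heads, which is Fat Tony's intuition expressed formally.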
Example 2: Job interview 
A man considers going to a job interview. He recently studied statistics and utility theory in college and performed well in the exams. Considering whether to take the interview, he tries to calculate the probability he will get the job versus the cost of the time spent.
This young job seeker forgets that real life has more variables than the small set he has chosen to estimate. Even with a low probability of success, a really good job may be worth the effort of going to the interview. Will he enjoy the process of the interview? Will his interview technique improve regardless of whether he gets the job or not? Even the statistics of the job market are non-linear. What other jobs could come the man's way by meeting the interviewer? Might there be a possibility of a very high pay-off in this company that he has not thought of?
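The point about non-linear payoffs can be illustrated with a toy expected-value calculation; every number below is a made-up assumption for illustration only:

```python
# Toy expected-value sketch: a low-probability, very high payoff can
# dominate the cost of attending. All figures are hypothetical.
cost_of_attending = 200        # time and effort, in arbitrary units

p_get_job = 0.05               # the narrow calculation the job seeker makes
value_of_job = 3_000

p_serendipity = 0.01           # an unforeseen opportunity via the interviewer
value_of_serendipity = 50_000

expected_value = (p_get_job * value_of_job
                  + p_serendipity * value_of_serendipity
                  - cost_of_attending)
print(expected_value)  # about 450: positive, driven mostly by the unlikely payoff
```

Leaving out the serendipity term, as the narrow model does, flips the sign of the decision; this is the variable the job seeker's small model omits.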
Example 3: Stock returns 
Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of events which are "outside model." For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987) but might not model the market breakdowns following the 2011 Japanese tsunami and its consequences. A fixed model considers the "known unknowns," but ignores the "unknown unknowns."
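To illustrate how a fixed model understates tail events: under a Gaussian model of daily returns, a 20-standard-deviation drop (roughly the commonly quoted magnitude of Black Monday 1987; the figure is an outside approximation, not from the book) is assigned an essentially impossible probability. A stdlib-only sketch:

```python
import math

# Probability that a standard normal variable falls below -20,
# i.e. a 20-sigma daily drop under a Gaussian returns model.
sigma = 20
p_tail = 0.5 * math.erfc(sigma / math.sqrt(2))

print(p_tail)  # around 2.75e-89: the model calls the event essentially impossible
```

An event the model rates at ~10⁻⁸⁹ nevertheless happened within a few decades of data, which is exactly the "unknown unknowns" problem.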
Relation to Platonicity 
The ludic fallacy is an instance of what Taleb calls Platonicity: the focus on those pure, well-defined, and easily discernible objects like triangles, or more social notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less tractable structures.
See also 
- Map-territory relation
- Congruence bias
- Déformation professionnelle
- Focusing effect
- Hindsight bias
- Quasi-empiricism in mathematics
- We (novel)
- Demarcation problem
- Wicked problem
- Relevance paradox
References 
- D.P. Simpson, Cassell's Latin and English Dictionary (New York: Hungry Minds, 1987), p. 134.
- Nassim Taleb, The Black Swan (New York: Random House, 2007), p. 309.
- "Black Swans, the Ludic Fallacy and Wealth Management"
Further reading 
- "The Ludic Fallacy", chapter from The Black Swan
- Taleb, Nassim N. (2007). The Black Swan. Random House. ISBN 1-4000-6351-5.
- Medin, D. & Atran, S. (2004). The native mind: Biological categorization and reasoning in development and across cultures. Psychological Review, 111, 960–98.
- Fodor, J. (1983) Modularity of mind. Cambridge, MA: MIT Press.
- "Tales of the Unexpected", Wilmott Magazine, June 2006, pp. 30–36
- "A misplaced question". Taleb at Freakonomics blog