From Wikipedia, the free encyclopedia
This article is about errors in reasoning. For the formal concept in philosophy and logic, see Formal fallacy. For other uses, see Fallacy (disambiguation).

A fallacy is the use of invalid or otherwise faulty reasoning, or "wrong moves"[1] in the construction of an argument.[2][3] A fallacious argument may be deceptive by appearing to be better than it really is. Some fallacies are committed intentionally to manipulate or persuade by deception, while others are committed unintentionally due to carelessness or ignorance. Lawyers acknowledge that the extent to which an argument is sound or unsound depends on the context in which the argument is made.[4]

Fallacies are commonly divided into "formal" and "informal". A formal fallacy can be expressed neatly in a standard system of logic, such as propositional logic,[2] while an informal fallacy originates in an error in reasoning other than an improper logical form.[5] Arguments containing informal fallacies may be formally valid, but still fallacious.[6]


Formal fallacy

Main article: Formal fallacy

A formal fallacy is a common error of thinking that can neatly be expressed in a standard system of logic.[2] An argument that is formally fallacious is rendered invalid due to a flaw in its logical structure. Such an argument is always considered to be wrong.

The presence of a formal fallacy in a deductive argument does not imply anything about the argument's premises or its conclusion. Both may actually be true, or may even be more probable as a result of the argument; but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described. By extension, an argument can contain a formal fallacy even if the argument is not a deductive one: for instance, an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
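As a worked illustration (not part of the article text), the invalidity that defines a formal fallacy can be checked mechanically with a truth table. The sketch below, using only the Python standard library, contrasts the valid form modus ponens ("P implies Q; P; therefore Q") with the classic formal fallacy of affirming the consequent ("P implies Q; Q; therefore P"); the helper names `implies` and `is_valid` are invented for this example:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # An argument form is valid iff every truth assignment that makes all
    # premises true also makes the conclusion true.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Modus ponens: P -> Q, P, therefore Q (valid)
modus_ponens = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                        lambda p, q: q)

# Affirming the consequent: P -> Q, Q, therefore P (a formal fallacy)
affirming_consequent = is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                                lambda p, q: p)

print(modus_ponens)          # True
print(affirming_consequent)  # False (P=False, Q=True is a counterexample)
```

The counterexample assignment (P false, Q true) makes both premises of the fallacious form true while the conclusion is false, which is exactly what "the conclusion does not follow from the premises" means.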

Common examples


Aristotle was the first to systematize logical errors into a list, since being able to refute an opponent's thesis is one way of winning an argument.[7] Aristotle's "Sophistical Refutations" (De Sophisticis Elenchis) identifies thirteen fallacies. He divided them into two major types: linguistic fallacies, which depend on language, and non-linguistic fallacies, which do not.[8][9] These are called verbal fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking about, while a verbal fallacy is an error in how the arguer is talking. Verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.[10] An example of a language-dependent fallacy is given as a debate as to who amongst humanity are learners: the wise or the ignorant.[11] Language-independent fallacies may be more complex, e.g.:

  1. "Coriscus is different from Socrates."
  2. "Socrates is a man."
  3. "Therefore, Coriscus is different from a man."[12]

Whately's grouping

Richard Whately defines a fallacy broadly as, "any argument, or apparent argument, which professes to be decisive of the matter at hand, while in reality it is not".[13]

Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion does follow from the premises. He then divided the logical group into two groups: purely logical and semi-logical. The semi-logical group included all of Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which are in the material group.[14]


Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, conversation among friends, political discourse, advertising, and comedy, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.

Examples of this include the speaker or writer:[15]

  1. Diverting the argument to unrelated issues with a red herring (Ignoratio elenchi)
  2. Insulting someone's character (argumentum ad hominem)
  3. Assuming the conclusion of an argument, a kind of circular reasoning, also called "begging the question" (petitio principii)
  4. Making jumps in logic (non sequitur)
  5. Identifying a false cause and effect (post hoc ergo propter hoc)
  6. Asserting that everyone agrees (bandwagoning)
  7. Creating a "false dilemma" ("either-or fallacy") in which the situation is oversimplified
  8. Selectively using facts (card-stacking)
  9. Making false or misleading comparisons (false equivalence and false analogy)
  10. Generalizing quickly and sloppily (hasty generalization)

In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies of amphiboly, for instance, to make ironic statements; Gary Larson and Scott Adams employed fallacious reasoning in many of their cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.[16]


In philosophy, the term formal fallacy is used for logical fallacies and is defined formally as a flaw in the structure of a deductive argument which renders the argument invalid. The term is preferred because logic refers to valid reasoning, whereas a fallacy is an argument that uses poor reasoning, so the phrase logical fallacy is something of an oxymoron. However, the same terms are used in informal discourse to mean an argument which is problematic for any reason. A logical form such as "A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee. Both a valid logical structure and true premises are required for a conclusion to be guaranteed true.

Paul Meehl

In Why I Do Not Attend Case Conferences (1973),[17] psychologist Paul Meehl discusses several fallacies that can arise in medical case conferences that are primarily held to diagnose patients. These fallacies can also be considered more general errors of thinking that all individuals (not just psychologists) are prone to making.

  • Barnum effect: Making a statement that is trivial, and true of everyone, e.g. of all patients, but which appears to have special significance to the diagnosis.
  • Sick-sick fallacy ("pathological set"): The tendency to generalize from personal experiences of health and ways of being, to the identification of others who are different from ourselves as being "sick". Meehl emphasizes that though psychologists claim to know about this tendency, most are not very good at correcting it in their own thinking.
  • "Me too" fallacy: The opposite of Sick-sick. Imagining that "everyone does this" and thereby minimizing a symptom without assessing the probability of whether a mentally healthy person would actually do it. A variation of this is Uncle George's pancake fallacy, which minimizes a symptom by reference to a friend or relative who exhibited a similar symptom, thereby implying that it is normal. Meehl points out that one should consider not that the patient is healthy by comparison, but that the friend or relative may also be unhealthy.
  • Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A relativism that Meehl sees as a waste of time. There is a distinction between reality and delusion that is important to make when assessing a patient and so the consideration of comparative realities can mislead and distract from the importance of a patient's delusion to a diagnostic decision.[clarification needed]
  • Hidden decisions: Decisions based on factors that we do not own up to or challenge, and for example result in the placing of middle- and upper-class patients in therapy while lower-class patients are given medication. Meehl identifies these decisions as related to an implicit ideal patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He sees YAVIS patients as being preferred by psychotherapists because they can pay for long-term treatment and are more enjoyable to interact with.
  • The spun-glass theory of the mind: The belief that the human organism is so fragile that minor negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the system. Essentially not giving humans, and sometimes patients, enough credit for their resilience and ability to recover.
  • “Crummy criterion fallacy”: This fallacy refers to how psychologists inappropriately explain away the technical aspects of tests, dismissing those aspects as crummy and presenting them poorly, rather than incorporating them into the interview, life-history, and other material presented at case conferences.
  • “Understanding it makes it normal”: The act of normalizing or excusing a behavior just because one understands the cause or function of it, regardless of its normalcy or appropriateness. For example, a psychologist would be guilty of committing this fallacy if he or she began to see the behavior of criminal clients as normal because of understanding how such behavior came about.
  • “Assumptions that content and dynamics explain why this person is abnormal”: Those who seek psychological services have certain characteristics associated with the fact they are seeking services. However, not only do they have the characteristics of clients but also characteristics of being human. To attribute one’s complete life dysfunction to attributes that make one a patient ignores the fact that some problems are just human problems.
  • “Identifying the softhearted with the softheaded”: The tendency to treat those who have sincere concern for the suffering (i.e., the softhearted) as one and the same as those who tend to be wrong in logical and empirical decisions (i.e., the softheaded).
  • “Ad hoc fallacy”: Creating an explanation after the fact, tailored to fit evidence that is already in hand. For example, in clinical psychology, this occurs when one explains why a patient is the way he or she is based only on the evidence relevant to that explanation.
  • “Doing it the hard way”: Going about a task in a more difficult manner when an equivalent easier option exists; for example, in clinical psychology, using an unnecessary instrument or procedure that can be difficult and time consuming while the same information can be ascertained through interviewing or interacting with the client.
  • “Social scientists’ anti-biology bias”: Meehl’s belief that social scientists like psychologists, sociologists, and psychiatrists have a tendency to react negatively to biological factors in abnormality, therefore tending to be anti-drug, anti-genetic, and anti-EST.
  • “Double standard of evidential morals”: Requiring less evidence for one's own claims than one demands of others.


Increasing availability and circulation of big data are driving proliferation of new metrics for scholarly authority,[18][19] and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami."[20] Where mathematical fallacies are subtle mistakes in reasoning leading to invalid mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the extrapolation of raw data to a measurement-based value claim. The ancient Greek Sophist Protagoras was one of the first thinkers to propose that humans can generate reliable measurements through his "human-measure" principle and the practice of dissoi logoi (arguing multiple sides of an issue).[21][22] This history helps explain why measurement fallacies are informed by informal logic and argumentation theory.

  • Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel Kahneman, that "describes the common human tendency to rely too heavily on the first piece of information offered (the 'anchor') when making decisions." In measurement arguments, anchoring fallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For example, limitations of the Journal Impact Factor (JIF) are well documented,[23] and even JIF pioneer Eugene Garfield notes, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative and qualitative indicators."[24] To the extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court commission of anchoring fallacies.
  • Naturalistic fallacy: In the context of measurement, a naturalistic fallacy can occur in a reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of sheer quantity metrics based on the premise "more is better"[20] or, in the case of developmental assessment in the field of psychology, "higher is better."[25]
  • False analogy: In the context of measurement, this error in reasoning occurs when claims are supported by unsound comparisons between data points, hence the false analogy's informal nickname of the "apples and oranges" fallacy.[26] For example, the Scopus and Web of Science bibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work).[27] Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.
  • Argumentum ex silentio: An argument from silence features an unwarranted conclusion advanced based on the absence of data. For example, Academic Analytics' Faculty Scholarly Productivity Index purports to measure overall faculty productivity, yet the tool does not capture data based on citations in books. This creates a possibility that low productivity measurements using the tool may constitute argumentum ex silentio fallacies, to the extent that such measurements are supported by the absence of book citation data.
  • Ecological fallacy: An ecological fallacy is committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide."[28] In metrical argumentation, ecological fallacies can be committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g. "Hispanic" faculty).[29]
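The ecological fallacy can be made concrete with numbers. In the sketch below, all counts are invented purely for illustration: the region with the larger Protestant share has the higher overall suicide rate, yet pooling the individual-level counts shows the Protestant rate is actually lower than the non-Protestant rate, so the group-level inference reverses at the individual level:

```python
# Hypothetical counts (invented for illustration) showing how a group-level
# pattern can reverse at the individual level.
groups = {
    "region_1": {"protestant": 80, "other": 20,
                 "suicides_protestant": 0, "suicides_other": 10},
    "region_2": {"protestant": 20, "other": 80,
                 "suicides_protestant": 0, "suicides_other": 5},
}

# Group-level view: overall rate per region vs share of Protestants.
for name, g in groups.items():
    population = g["protestant"] + g["other"]
    rate = (g["suicides_protestant"] + g["suicides_other"]) / population
    share = g["protestant"] / population
    print(f"{name}: {share:.0%} Protestant, overall rate {rate:.0%}")

# Individual-level view: pool the raw counts across regions.
prot = sum(g["protestant"] for g in groups.values())
prot_suicides = sum(g["suicides_protestant"] for g in groups.values())
other = sum(g["other"] for g in groups.values())
other_suicides = sum(g["suicides_other"] for g in groups.values())
print(f"Protestant individual rate: {prot_suicides / prot:.0%}")        # 0%
print(f"Non-Protestant individual rate: {other_suicides / other:.0%}")  # 15%
```

Region 1 (80% Protestant) shows a 10% rate against region 2's 5%, suggesting at the group level that Protestants are more at risk, even though in these invented data no Protestant individual contributes to either rate.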

Other systems of classification

Of other classifications of fallacies in general the most famous are those of Francis Bacon and J. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i. J. S. Mill discussed the subject in book v. of his Logic, and Jeremy Bentham's Book of Fallacies (1824) contains valuable remarks. See Richard Whately's Logic, bk. v.; A. de Morgan, Formal Logic (1847); A. Sidgwick, Fallacies (1883) and other textbooks.

Assessment — pragmatic theory

According to the pragmatic theory,[30] a fallacy can in some instances be an error: the use of a heuristic (a short version of an argumentation scheme) to jump to a conclusion. However, even more worryingly, in other instances it is a tactic or ploy used inappropriately in argumentation to try to get the best of a speech partner unfairly. There are always two parties to an argument containing a fallacy: the perpetrator and the intended victim. The dialogue framework required to support the pragmatic theory of fallacy is built on the presumption that argumentative dialogue has both an adversarial component and a collaborative component. A dialogue has individual goals for each participant, but also collective (shared) goals that apply to all participants. A fallacy of the second kind is seen as more than a simple violation of a rule of reasonable dialogue. It is also a deceptive tactic of argumentation, based on sleight-of-hand. Aristotle explicitly compared contentious reasoning to unfair fighting in an athletic contest. But the roots of the pragmatic theory go back even further, to the Sophists. The pragmatic theory finds its roots in the Aristotelian conception of a fallacy as a sophistical refutation, but it also supports the view that many of the types of arguments traditionally labelled as fallacies are in fact reasonable techniques of argumentation that can be used, in many cases, to support legitimate goals of dialogue. Hence on the pragmatic approach, each case needs to be analyzed individually, to determine from the textual evidence whether the argument is fallacious or reasonable.

Logical fallacies

Fallacies are defects that weaken arguments; logical fallacies are errors in reasoning that invalidate the argument. McMullin (2000), a clinical psychologist, explains that "logical fallacies are unsubstantiated assertions that are often delivered with a conviction that makes them sound as though they are proven facts".[31] It is important to understand what fallacies are so that you can recognize them in your own and others' writing. Avoiding fallacies will strengthen your ability to produce strong arguments. It is important to note that:

Fallacious arguments are very, very common and can be quite persuasive, at least to the casual reader or listener. You can find dozens of examples of fallacious reasoning in newspapers, advertisements, and other sources. It is sometimes hard to evaluate whether an argument is fallacious. An argument might be very weak, somewhat weak, somewhat strong, or very strong. An argument that has several stages or parts might have some strong sections and some weak ones.

Examples of types of logical fallacies

Hasty generalization

Definition: Making assumptions about a whole group or range of cases based on a sample that is inadequate (usually because it is atypical or just too small). Stereotypes about people ("frat boys are drunkards," "grad students are nerdy," “women don’t enjoy sport” etc.) are a common example of the principle underlying hasty generalization.

Missing the point

Definition: The premises of an argument do support a particular conclusion, but not the conclusion that the arguer actually draws.

Post hoc (false cause)

This fallacy gets its name from the Latin phrase "post hoc, ergo propter hoc," which translates as "after this, therefore because of this." Definition: Assuming that because B comes after A, A caused B. Of course, sometimes one event really does cause another one that comes later—for example, if I register for a class, and my name later appears on the roll, it's true that the first event caused the one that came later. But sometimes two events that seem related in time aren't really related as cause and effect. That is, correlation isn't the same thing as causation.

Slippery slope

Definition: The arguer claims that a sort of chain reaction, usually ending in some dire consequence, will take place, but there's really not enough evidence for that assumption. The arguer asserts that if we take even one step onto the "slippery slope," we will end up sliding all the way to the bottom; he or she assumes we can't stop halfway down the hill.[32]

References


  1. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgments of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 1. ISBN 978-90-481-2613-2. 
  2. ^ a b c Harry J. Gensler, The A to Z of Logic (2010:p74). Rowman & Littlefield, ISBN 9780810875968
  3. ^ Woods, John (2004). The Death of Argument. Applied Logic Series. 32. pp. 3–23. ISBN 9789048167005. 
  4. ^ Bustamente, Thomas; Dahlman, Christian, eds. (2015). Argument types and fallacies in legal argumentation. Heidelberg: Springer International Publishing. p. x. ISBN 978-3-319-16147-1. 
  5. ^ "Informal Fallacies, Northern Kentucky University". Retrieved 2013-09-10. 
  6. ^ Dowden, Bradley. "Fallacy". Internet Encyclopedia of Philosophy. Retrieved 17 February 2016. 
  7. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 2. ISBN 978-90-481-2613-2. 
  8. ^ "Aristotle's original 13 fallacies". The Non Sequitur. Retrieved 2013-05-28. 
  9. ^ http://www.logiclaw.co.uk/fallacies/Straker3.html
  10. ^ "PHIL 495: Philosophical Writing (Spring 2008), Texas A&M University". Archived from the original on 2008-09-05. Retrieved 2013-09-10. 
  11. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 3. ISBN 978-90-481-2613-2. 
  12. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 4. ISBN 978-90-481-2613-2. 
  13. ^ Frans H. van Eemeren, Bart Garssen, Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, p.8. ISBN 9789048126149.
  14. ^ Coffey, P. (1912). The Science of Logic. Longmans, Green, and Company. p. 302. LCCN 12018756. Retrieved 2016-02-22. 
  15. ^ Ed Shewan (2003). Applications of Grammar: Principles of Effective Communication (2nd ed.). Christian Liberty Press. pp. 92 ff. ISBN 1-930367-28-7. Retrieved 2016-02-22. 
  16. ^ Boyer, Web. "How to Be Persuasive". Retrieved 2012-12-05. 
  17. ^ Meehl, Paul Everett (1973). "Why I Do Not Attend Case Conferences". Psychodiagnosis: Selected Papers. University of Minnesota Press. pp. 225–302. ISBN 978-0-8166-0685-6. Retrieved 27 April 2017. 
  18. ^ Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World. January: 32–36. Retrieved October 28, 2013. 
  19. ^ Jensen, Michael (June 15, 2007). "The New Metrics of Scholarly Authority". Chronicle Review. Retrieved 28 October 2013. 
  20. ^ a b Baveye, Phillippe C. (2010). "Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective". Journal of Scholarly Publishing. 41: 191–215. doi:10.1353/scp.0.0074. 
  21. ^ Schiappa, Edward (1991). Protagoras and Logos: A Study in Greek Philosophy and Rhetoric. Columbia, SC: University of South Carolina Press. ISBN 0872497585. 
  22. ^ Protagoras (1972). The Older Sophists. Indianapolis, IN: Hackett Publishing Co. ISBN 0872205568. 
  23. ^ National Communication Journal (2013). Impact Factors, Journal Quality, and Communication Journals: A Report for the Council of Communication Associations (PDF). Washington, D.C.: National Communication Association. Retrieved 2016-02-22. 
  24. ^ Garfield, Eugene (1993). "What Citations Tell us About Canadian Research". Canadian Journal of Library and Information Science. 18 (4): 34. 
  25. ^ Stein, Zachary (October 2008). "Myth Busting and Metric Making: Refashioning the Discourse about Development". Integral Leadership Review. 8 (5). Archived from the original on 2013-10-30. Retrieved 28 October 2013. 
  26. ^ Kornprobst, Markus (2007). "Comparing Apples and Oranges? Leading and Misleading Uses of Historical Analogies". Millennium — Journal of International Studies. 36: 29–49. doi:10.1177/03058298070360010301. Archived from the original on 30 October 2013. Retrieved 29 October 2013. 
  27. ^ Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World. January: 32. Retrieved October 28, 2013. 
  28. ^ Freedman, David A. (2004). Michael S. Lewis-Beck & Alan Bryman & Tim Futing Liao, ed. Encyclopedia of Social Science Research Methods. Thousand Oaks, CA: Sage. pp. 293–295. ISBN 0761923632. 
  29. ^ Allen, Henry L. (1997). "Faculty Workload and Productivity: Ethnic and Gender Disparities" (PDF). NEA 1997 Almanac of Higher Education: 39. Retrieved 29 October 2013. 
  30. ^ Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press. 
  31. ^ McMullin, R. (2000). The New Handbook of Cognitive Therapy Techniques. New York: W. W. Norton & Company Ltd.
  32. ^ http://www.webpages.uidaho.edu/eng207-td/Logic%20and%20Analysis/most_common_logical_fallacies.htm

Further reading

  • C. L. Hamblin, Fallacies, Methuen London, 1970. reprinted by Vale Press in 1998 as ISBN 0-916475-24-7.
  • Hans V. Hansen; Robert C. Pinto (1995). Fallacies: classical and contemporary readings. Penn State Press. ISBN 978-0-271-01417-3. 
  • Frans van Eemeren; Bart Garssen; Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion. Springer. ISBN 978-90-481-2613-2. 
  • Douglas N. Walton, Informal logic: A handbook for critical argumentation. Cambridge University Press, 1989.
  • Douglas, Walton (1987). Informal Fallacies. Amsterdam: John Benjamins. 
  • Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press. 
  • Walton, Douglas (2010). "Why Fallacies Appear to Be Better Arguments than They Are". Informal Logic. 30 (2): 159–184. 
  • John Woods (2004). The death of argument: fallacies in agent based reasoning. Springer. ISBN 978-1-4020-2663-8. 
