If "this sentence is false" is true, then the sentence is false; but if the sentence is false, then it is true, and so on.
- History
- Explanation of the paradox and variants
- Possible resolutions
- Logical structure of the liar paradox
- Applications
- In popular culture
- See also
- Notes
- References
- External links
History
The Epimenides paradox (circa 600 BC) has been suggested as an example of the liar paradox, but they are not logically equivalent. The semi-mythical seer Epimenides, a Cretan, reportedly stated that "The Cretans are always liars." However, Epimenides' statement that all Cretans are liars can be resolved as false, given that he knows of at least one other Cretan who does not lie.
In Ancient Greek the paradox was known as pseudómenos lógos (ψευδόμενος λόγος). One version of the liar paradox is attributed to the Greek philosopher Eubulides of Miletus, who lived in the 4th century BC. Eubulides reportedly asked, "A man says that he is lying. Is what he says true or false?"
The paradox was once discussed by St. Jerome in a sermon:
"I said in my alarm, 'Every man is a liar!' "(Psalms 116:11) Is David telling the truth or is he lying? If it is true that every man is a liar, and David's statement, "Every man is a liar" is true, then David also is lying; he, too, is a man. But if he, too, is lying, his statement: "Every man is a liar," consequently is not true. Whatever way you turn the proposition, the conclusion is a contradiction. Since David himself is a man, it follows that he also is lying; but if he is lying because every man is a liar, his lying is of a different sort."
In the early Islamic tradition, the liar paradox was discussed for at least five centuries, starting from the late 9th century, apparently without being influenced by any other tradition. Naṣīr al-Dīn al-Ṭūsī may have been the first logician to identify the liar paradox as self-referential.
Explanation of the paradox and variants
The problem of the liar paradox is that it seems to show that common beliefs about truth and falsity actually lead to a contradiction. Sentences can be constructed that cannot consistently be assigned a truth value even though they are completely in accord with grammar and semantic rules.
The simplest version of the paradox is the sentence:
This statement is false. (A)
If (A) is true, then "This statement is false" is true. Therefore (A) must be false. The hypothesis that (A) is true leads to the conclusion that (A) is false, a contradiction.
If (A) is false, then "This statement is false" is false. Therefore (A) must be true. The hypothesis that (A) is false leads to the conclusion that (A) is true, another contradiction. Either way, (A) is both true and false, which is a paradox.
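The two-case analysis above can be checked mechanically. The following minimal Python sketch (an illustration, not part of the original argument) tries both truth values for (A); an assignment is consistent only if the value given to (A) equals the truth of what (A) asserts, namely that (A) is false:

```python
# Sentence (A): "This statement is false."
# (A) asserts its own falsehood, so a candidate truth value a
# is consistent only if a == (not a).
consistent = [a for a in (True, False) if a == (not a)]
print(consistent)  # -> [] : neither True nor False is consistent
```

The empty result mirrors the prose argument: assuming (A) true forces it false, and assuming it false forces it true.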
However, that the liar sentence can be shown to be true if it is false and false if it is true has led some to conclude that it is "neither true nor false". This response to the paradox is, in effect, the rejection of the claim that every statement has to be either true or false, also known as the principle of bivalence, a concept related to the law of the excluded middle.
The proposal that the statement is neither true nor false has given rise to the following, strengthened version of the paradox:
This statement is not true. (B)
If (B) is neither true nor false, then it must be not true. Since this is what (B) itself states, it means that (B) must be true. Since initially (B) was not true and is now true, another paradox arises.
Another reaction to the paradox of (A) is to posit, as Graham Priest has, that the statement is both true and false. Nevertheless, even Priest's analysis is susceptible to the following version of the liar:
This statement is only false. (C)
If (C) is both true and false, then (C) is only false. But then, it is not true. Since initially (C) was true and is now not true, it is a paradox.
There are also multi-sentence versions of the liar paradox. The following is the two-sentence version:
The following statement is true. (D1)
The preceding statement is false. (D2)
Assume (D1) is true. Then (D2) is true. This would mean that (D1) is false. Therefore (D1) is both true and false.
Assume (D1) is false. Then (D2) is false. This would mean that (D1) is true. Thus (D1) is both true and false. Either way, (D1) is both true and false - the same paradox as (A) above.
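The same exhaustive check works for the two-sentence version; a brief Python sketch (again only illustrative) enumerating all four possible assignments for (D1) and (D2):

```python
# (D1): "The following statement is true."  -> consistent only if d1 == d2
# (D2): "The preceding statement is false." -> consistent only if d2 == (not d1)
consistent = [(d1, d2)
              for d1 in (True, False)
              for d2 in (True, False)
              if d1 == d2 and d2 == (not d1)]
print(consistent)  # -> [] : no assignment satisfies both sentences
```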
The multi-sentence version of the liar paradox generalizes to any circular sequence of such statements (wherein the last statement asserts the truth/falsity of the first statement), provided there are an odd number of statements asserting the falsity of their successor; the following is a three-sentence version, with each statement asserting the falsity of its successor:
E2 is false. (E1)
E3 is false. (E2)
E1 is false. (E3)
Assume (E1) is true. Then (E2) is false, which means (E3) is true, and hence (E1) is false, leading to a contradiction.
Assume (E1) is false. Then (E2) is true, which means (E3) is false, and hence (E1) is true. Either way, (E1) is both true and false - the same paradox as with (A) and (D1).
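This parity condition can be illustrated by brute force. The sketch below (the function name is ours, chosen for illustration) checks cycles in which every statement asserts the falsity of its cyclic successor, so the number of falsity assertions equals the cycle length:

```python
from itertools import product

def consistent_assignments(n):
    """Truth-value assignments for a cycle of n statements,
    each asserting that its cyclic successor is false:
    s[i] must equal not s[(i + 1) % n]."""
    return [s for s in product((True, False), repeat=n)
            if all(s[i] == (not s[(i + 1) % n]) for i in range(n))]

print(len(consistent_assignments(3)))  # -> 0 : odd cycle, paradoxical
print(len(consistent_assignments(2)))  # -> 2 : even cycle, consistent either way
```

Odd cycles admit no consistent assignment, reproducing the paradox; even cycles admit exactly two alternating assignments.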
There are many other variants, and many complements, possible. In normal sentence construction, the simplest version of the complement is the sentence:
This statement is true. (F)
If F is assumed to bear a truth value, then it presents the problem of determining the object of that value. But a simpler version is possible, by assuming that the single word "true" bears a truth value. The analogue to the paradox is to assume that the single word "false" likewise bears a truth value, namely that it is false. This reveals that the paradox can be reduced to the mental act of assuming that the very idea of fallacy bears a truth value, namely that the very idea of fallacy is false: an act of misrepresentation. So the symmetrical version of the paradox would be:
The following statement is false. (G1)
The preceding statement is false. (G2)
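The contrast between the liar and these complements can also be made mechanical; an illustrative Python sketch of the consistency conditions for (F) and for the pair (G1)/(G2):

```python
# Complement (F): "This statement is true." -> condition a == a,
# satisfied by every value: consistent but underdetermined.
print([a for a in (True, False) if a == a])  # -> [True, False]

# Symmetrical pair (G1)/(G2): each says the other is false
# -> g1 == (not g2) and g2 == (not g1), satisfied two ways.
print([(g1, g2)
       for g1 in (True, False)
       for g2 in (True, False)
       if g1 == (not g2) and g2 == (not g1)])  # -> [(True, False), (False, True)]
```

Unlike the liar, these constructions have consistent assignments; their defect is that nothing determines which assignment is the right one.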
Possible resolutions
Alfred Tarski
Alfred Tarski diagnosed the paradox as arising only in languages that are "semantically closed", by which he meant a language in which it is possible for one sentence to predicate truth (or falsehood) of another sentence in the same language (or even of itself). To avoid self-contradiction, it is necessary when discussing truth values to envision levels of languages, each of which can predicate truth (or falsehood) only of languages at a lower level. So, when one sentence refers to the truth-value of another, it is semantically higher. The sentence referred to is part of the "object language", while the referring sentence is considered to be a part of a "meta-language" with respect to the object language. It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a system from becoming self-referential.
Arthur Prior
Arthur Prior asserts that there is nothing paradoxical about the liar paradox. His claim (which he attributes to Charles Sanders Peirce and John Buridan) is that every statement includes an implicit assertion of its own truth. Thus, for example, the statement "It is true that two plus two equals four" contains no more information than the statement "two plus two equals four", because the phrase "it is true that..." is always implicitly there. And in the self-referential spirit of the liar paradox, the phrase "it is true that..." is equivalent to "this whole statement is true and ...".
Thus the following two statements are equivalent:
This statement is false.
This statement is true and this statement is false.
The latter is a simple contradiction of the form "A and not A", and hence is false. There is therefore no paradox, because the claim that this two-conjunct liar is false does not lead to a contradiction. Eugene Mills presents a similar answer, as do Neil Lefebvre and Melissa Schelein.
Saul Kripke
Saul Kripke argued that whether a sentence is paradoxical or not can depend upon contingent facts. Suppose that the only thing Smith says about Jones is:
A majority of what Jones says about me is false.
and Jones says only these three things about Smith:
Smith is a big spender.
Smith is soft on crime.
Everything Smith says about me is true.
If Smith really is a big spender but is not soft on crime, then both Smith's remark about Jones and Jones's last remark about Smith are paradoxical.
Kripke proposes a solution in the following manner. If a statement's truth value is ultimately tied up in some evaluable fact about the world, that statement is "grounded". If not, that statement is "ungrounded". Ungrounded statements do not have a truth value. Liar statements and liar-like statements are ungrounded, and therefore have no truth value.
Jon Barwise and John Etchemendy
Jon Barwise and John Etchemendy propose that the liar sentence (which they interpret as synonymous with the Strengthened Liar) is ambiguous. They base this conclusion on a distinction they make between a "denial" and a "negation". If the liar means, "It is not the case that this statement is true", then it is denying itself. If it means, "This statement is not true", then it is negating itself. They go on to argue, based on situation semantics, that the "denial liar" can be true without contradiction while the "negation liar" can be false without contradiction. Their 1987 book makes heavy use of non-well-founded set theory.
Dialetheism
Graham Priest and other logicians, including J.C. Beall and Bradley Armour-Garb, have proposed that the liar sentence should be considered to be both true and false, a point of view known as dialetheism. Dialetheism is the view that there are true contradictions. Dialetheism raises its own problems. Chief among these is that since dialetheism recognizes the liar paradox, an intrinsic contradiction, as being true, it must discard the long-recognized principle of explosion, which asserts that any proposition can be deduced from a contradiction, unless the dialetheist is willing to accept trivialism - the view that all propositions are true. Since trivialism is an intuitively false view, dialetheists nearly always reject the explosion principle. Logics that reject it are called paraconsistent.
Non-cognitivism
Andrew Irvine has argued in favour of a non-cognitivist solution to the paradox, suggesting that some apparently well-formed sentences will turn out to be neither true nor false and that "formal criteria alone will inevitably prove insufficient" for resolving the paradox.
Logical structure of the liar paradox
For a better understanding of the liar paradox, it is useful to write it down in a more formal way. If "this statement is false" is denoted by A and its truth value is being sought, it is necessary to find a condition that restricts the choice of possible truth values of A. Because A is self-referential it is possible to give the condition by an equation.
If some statement, B, is assumed to be false, one writes "B = false". The statement (C) that the statement B is false would be written as "C = 'B = false'". Now the liar paradox can be expressed as the statement A, that A is false:
A = "A = false"
This is an equation from which the truth value of A = "this statement is false" could hopefully be obtained. In the Boolean domain, "A = false" is equivalent to "not A", and therefore the equation has no solution. This motivates a reinterpretation of A. The simplest logical approach that makes the equation solvable is the dialetheistic one, in which case the solution is that A is both "true" and "false". Other resolutions mostly involve some modification of the equation; Arthur Prior claims that the equation should be "A = 'A = false and A = true'", and therefore A is false. In computational verb logic, the liar paradox is extended to statements like "I hear what he says; he says what I don't hear", where verb logic must be used to resolve the paradox.
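Prior's modification can be checked in the same brute-force style as before; an illustrative Python sketch comparing the unmodified liar equation with Prior's version:

```python
# Unmodified liar equation: A = "A = false", i.e. a == (not a).
print([a for a in (True, False) if a == (not a)])  # -> [] : unsolvable

# Prior's equation: A = "A = false and A = true",
# i.e. a == ((not a) and a); the right-hand side is always False,
# so A = False is the unique, non-paradoxical solution.
print([a for a in (True, False) if a == ((not a) and a)])  # -> [False]
```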
Applications
Gödel's First Incompleteness Theorem
Gödel's incompleteness theorems are two fundamental theorems of mathematical logic which state inherent limitations of all but the most trivial axiomatic systems for mathematics. The theorems were proven by Kurt Gödel in 1931, and are important in the philosophy of mathematics. Roughly speaking, in proving the first incompleteness theorem, Gödel used a modified version of the liar paradox, replacing "this sentence is false" with "this sentence is not provable", called the "Gödel sentence G". Thus for a theory T, G is true but not provable in T. The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.
To prove the first incompleteness theorem, Gödel represented statements by numbers. Then the theory at hand, which is assumed to prove certain facts about numbers, also proves facts about its own statements. Questions about the provability of statements are represented as questions about the properties of numbers, which would be decidable by the theory if it were complete. In these terms, the Gödel sentence states that no natural number exists with a certain, strange property. A number with this property would encode a proof of the inconsistency of the theory. If there were such a number then the theory would be inconsistent, contrary to the consistency hypothesis. So, under the assumption that the theory is consistent, there is no such number.
It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently by Gödel (when he was working on the proof of the incompleteness theorem) and by Alfred Tarski.
In popular culture
The liar paradox is occasionally used in fiction to shut down artificial intelligences, which are presented as being unable to process the sentence. In the Star Trek: The Original Series episode "I, Mudd", the liar paradox is used by Captain Kirk and Harry Mudd to confuse and ultimately disable an android holding them captive. In the 1973 Doctor Who serial The Green Death, the Doctor temporarily stumps the insane computer BOSS by asking it "If I were to tell you that the next thing I say would be true, but the last thing I said was a lie, would you believe me?" However, BOSS eventually decides the question is irrelevant and summons security. In the 2011 video game Portal 2, GLaDOS attempts to use the "this sentence is false" paradox to defeat the naïve artificial intelligence Wheatley, but, lacking the intelligence to realize the statement is a paradox, he simply responds, "Um, true. I'll go with true. There, that was easy," and is unaffected.
The second book in Emily Rodda's Deltora Quest series, The Lake of Tears, has the main character, Lief, forced to answer a riddle correctly or be killed by the guardian of a bridge. When Lief answers the trick riddle wrongly, he confronts the guardian with his treachery. The guardian answers with another riddle, telling Lief to make a statement: if it is false, the guardian will kill Lief by chopping off his head; if it is true, he will strangle him. Lief replies, "You will cut off my head." As the guardian was cursed to his fate by the evil sorceress Thaegan "until truth and lies become one", the paradox allows him to revert to his original form: an eagle.
See also
- Card paradox
- Epimenides paradox
- List of paradoxes
- Pinocchio paradox
- Quine's paradox
- Socratic paradox
- Yablo's paradox
Notes
- The Epimenides paradox has "All Cretans are liars." (Titus 1:12)
- Andrea Borghini. "Paradoxes of Eubulides". About.com (New York Times). Retrieved 2012-09-04.
- St. Jerome, Homily on Psalm 115 (116B), translated by Sr. Marie Liguori Ewald, IHM, in The Homilies of Saint Jerome, Volume I (1-59 On the Psalms), The Fathers of the Church 48 (Washington, D.C.: The Catholic University of America Press, 1964), 294
- Ahmed Alwishah and David Sanson (2009). "The Early Arabic Liar: The Liar Paradox in the Islamic World from the Mid-Ninth to the Mid-Thirteenth Centuries CE". p. 1.
- Andrew Irvine, “Gaps, Gluts, and Paradox,” Canadian Journal of Philosophy, supplementary vol. 18 [Return of the A priori] (1992), 273-99
- Mills, Eugene (1998) ‘A simple solution to the Liar’, Philosophical Studies 89: 197-212.
- Lefebvre, N. and Schelein, M., "The Liar Lied," in Philosophy Now issue 51
- Barwise, J.; Etchemendy, J. (1989). The Liar: An Essay on Truth and Circularity. Oxford University Press, USA. p. 6. ISBN 9780195059441. LCCN 86031260.
- Kripke, Saul (1975). "An Outline of a Theory of Truth". Journal of Philosophy (72): 690–716.
- Jon Barwise and John Etchemendy (1987) The Liar. Oxford University Press.
- Yang, T. (Sep 2001). "Computational verb systems: The paradox of the liar". International Journal of Intelligent Systems 16 (9): 1053–1067. doi:10.1002/int.1049.
- Crossley, J.N.; Ash, C.J.; Brickhill, C.J.; Stillwell, J.C.; Williams, N.H. (1972). What is mathematical logic?. London-Oxford-New York: Oxford University Press. pp. 52–53. ISBN 0-19-888087-1. Zbl 0251.02001.
References
- Greenough, P.M., (2001) " ," American Philosophical Quarterly 38:
- Hughes, G.E., (1992) John Buridan on Self-Reference : Chapter Eight of Buridan's Sophismata, with a Translation, and Introduction, and a Philosophical Commentary, Cambridge Univ. Press, ISBN 0-521-28864-9. Buridan's detailed solution to a number of such paradoxes.
- Kirkham, Richard (1992) Theories of Truth. MIT Press. Especially chapter 9.
- Saul Kripke (1975) "An Outline of a Theory of Truth," Journal of Philosophy 72: 690-716.
- Lefebvre, Neil, and Schelein, Melissa (2005) "The Liar Lied," Philosophy Now issue 51.
- Graham Priest (1984) "The Logic of Paradox Revisited," Journal of Philosophical Logic 13: 153-179.
- A. N. Prior (1976) Papers in Logic and Ethics. Duckworth.
- Smullyan, Raymond (19nn) What is the Name of this Book?. ISBN 0-671-62832-1. A collection of logic puzzles exploring this theme.
- Portal 2: Chapter 7 The reunion (2011) Valve Corporation
External links
- Liar Paradox entry by Bradley Dowden in the Internet Encyclopedia of Philosophy
- Liar Paradox entry by J C Beall and Michael Glanzberg in the Stanford Encyclopedia of Philosophy