
First-order logic

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Danman3459 (talk | contribs) at 15:59, 21 September 2007.

First-order logic (FOL) is a formal deductive system used by mathematicians, philosophers, linguists, and computer scientists. It goes by many names, including first-order predicate calculus (FOPC), the lower predicate calculus, the language of first-order logic, and predicate logic. Unlike natural languages such as English, FOL uses a wholly unambiguous formal language interpreted by mathematical structures. FOL is a system of deduction that extends propositional logic by allowing quantification over individuals of a given domain (universe) of discourse. For example, "Every individual has the property P" can be stated in FOL.

While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. Take for example the sentences "Socrates is a man" and "Plato is a man". In propositional logic these are two unrelated propositions, denoted for example by p and q. In first-order logic, however, both sentences are connected by the same property: Man(x), where Man(x) means that x is a man. When x = Socrates we get the first proposition, p, and when x = Plato we get the second, q. Such a construction allows for a much more powerful logic when quantifiers are introduced, such as "for every x..."; for example, "for every x, if Man(x), then...". Without quantifiers, every valid argument in FOL is valid in propositional logic, and vice versa.
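As a toy illustration (using Python as a metalanguage; the finite domain and the interpretations below are made-up assumptions, not part of FOL itself), a predicate becomes a function of individuals and a universal quantifier becomes iteration over the domain:

```python
# Toy finite model (an illustrative assumption).
domain = ["Socrates", "Plato"]

def Man(x):
    # interpretation of the predicate Man(x) in this model
    return x in {"Socrates", "Plato"}

def Mortal(x):
    # interpretation of Mortal(x): everything in this model is mortal
    return True

# "for every x, if Man(x) then Mortal(x)", evaluated by iterating the domain
forall_holds = all((not Man(x)) or Mortal(x) for x in domain)
print(forall_holds)  # True
```

In propositional logic, Man("Socrates") and Man("Plato") would simply be two unrelated atoms p and q; the quantified sentence has no propositional counterpart.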

The language of first-order logic has sufficient expressive power for the formalization of most of mathematics (see second-order logic for comparison). A first-order theory consists of a set of axioms (usually finite or recursively enumerable) and the statements deducible from them. The popular set theory ZFC is an example of a first-order theory, and it is generally accepted that all of classical mathematics can be formalized in ZFC. There are other theories that are commonly formalized independently in first-order logic such as Peano arithmetic.

Why is first-order logic needed?

Propositional logic is not adequate for formalizing valid arguments that rely on the internal structure of the propositions involved. To see this, consider the valid syllogistic argument:

  • All men are mortal
  • Socrates is a man
  • Therefore, Socrates is mortal

which upon translation into propositional logic yields:

  • A
  • B
  • ∴ C

(taking ∴ to mean "therefore").

This translation is invalid: logic validates arguments according to their structure, and nothing in the structure of this translated argument (C follows from A and B, for arbitrary A, B, C) suggests that it is valid. A translation that preserves the intuitive (and formal) validity of the argument must take into consideration the deeper structure of propositions, such as the essential notions of predication and quantification. Propositional logic deals only with truth-functional validity: any assignment of truth-values to the variables of the argument should make either the conclusion true or at least one of the premises false. Clearly we may (uniformly) assign truth values to the variables of the above argument such that A, B are both true but C is false. Hence the argument is truth-functionally invalid. On the other hand, it is impossible to (uniformly) assign truth values to the argument "A follows from (A and B)" such that (A and B) is true (hence A is true and B is true) and A false.

In contrast, this argument can be easily translated into first-order logic:

  • ∀x (Man(x) → Mortal(x))
  • Man(Socrates)
  • ∴ Mortal(Socrates)

Where "∀x" means "for all x", "→" means "implies", Man(Socrates) means "Socrates is a man", and Mortal(Socrates) means "Socrates is mortal".

Defining first-order logic

A predicate calculus consists of

  • formation rules (i.e. recursive definitions for forming well-formed formulas)
  • transformation rules (i.e. inference rules for deriving theorems)
  • axioms or axiom schemata

The axioms considered here are logical axioms which are part of classical FOL. It is important to note that FOL can be formalized in many equivalent ways; there is nothing canonical about the axioms and rules of inference given in this article. There are infinitely many equivalent formalizations all of which yield the same theorems and non-theorems, and all of which have equal right to the title 'FOL'.

FOL is used as the basic "building block" for many mathematical theories. FOL provides several built-in rules, such as the axiom ∀x P(x) → ∀y P(y) (if P(x) is true for every x, then P(y) is true for every y). Additional non-logical axioms are added to produce specific first-order theories based on the axioms of classical FOL; these theories built on FOL are called classical first-order theories. One example of a classical first-order theory is Peano arithmetic, which adds the axiom ∀x ∃y Q(x,y) (i.e. for every x there exists y such that y=x+1, where Q(x,y) is interpreted as "y=x+1"). This additional axiom is a non-logical axiom; it is not part of FOL, but instead is an axiom of the theory (an axiom of arithmetic rather than of logic). Axioms of the latter kind are also called axioms of first-order theories. The axioms of first-order theories are not regarded as truths of logic per se, but rather as truths of the particular theory, which usually has associated with it an intended interpretation of its non-logical symbols. (See an analogous idea at logical versus non-logical symbols.) Thus, the axiom about Q(x,y) is true only under the interpretation of the relation Q(x,y) as "y=x+1", and only in the theory of Peano arithmetic. Classical FOL does not have associated with it an intended interpretation of its non-logical vocabulary (except arguably a symbol denoting identity, depending on whether one regards such a symbol as logical). Classical set theory is another example of a first-order theory (a theory built on FOL).

Vocabulary

Before setting up the formation rules, one has to describe the "vocabulary", which is composed of

  1. A set of predicate variables (or relations), each with some valence (or arity, the number of its arguments) ≥ 1, which are often denoted by uppercase letters P, Q, R,... .
    • For example, P(x) is a predicate variable of valence 1. It can stand for "x is a man", for example.
    • Q(x,y) is a predicate variable of valence 2. It can stand for "x is greater than y" in arithmetic or "x is the father of y", for example.
    • It is possible to allow relations of valence 0; these can be considered as propositional variables. For example, P, which can stand for any statement.
    • By using functions (see below), it is possible to dispense with all predicate variables of valence larger than one. For example, "x > y" (a predicate of valence 2, of the type Q(x,y)) can be replaced by a predicate of valence 1 about the ordered pair (x,y).
  2. A set of constants, often denoted by lowercase letters at the beginning of the alphabet a, b, c,... .
  3. A set of functions, each of some valence ≥ 1, which are often denoted by lowercase letters f, g, h,... .
    • Examples: f(x) may stand for "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, f(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y".
    • A constant is really a function of valence 0. However it is traditional to use the term "function" only for functions of valence at least 1.
    • One can in principle dispense entirely with functions of arity > 2 and predicates of arity > 1 if there is a function symbol of arity 2 representing an ordered pair (or predicate symbols of arity 2 representing the projection relations of an ordered pair). The pair or projections need to satisfy the natural axioms.
    • One can in principle dispense entirely with functions and constants. For example, instead of using a constant 0 one may use a predicate 0(x) (interpreted as "x=0"), and replace every predicate such as P(0,y) with ∀x (0(x) → P(x,y)). A function such as f(x1,x2,...) will similarly be replaced by a predicate F(x1,x2,...,y) (interpreted as "y=f(x1,x2,...)").
  4. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z,... .
  5. Symbols denoting logical operators (or connectives): ¬ (logical not), → (logical conditional).
    • ¬(φ → ¬ψ) is logically equivalent to φ ∧ ψ (logical and). φ ∧ ψ can be seen as a shorthand for it. Alternatively, one may add the symbol ∧ as a logical operator to the vocabulary, with appropriate axioms.
    • ¬φ → ψ is logically equivalent to φ ∨ ψ (logical or). φ ∨ ψ can be seen as a shorthand for it. Alternatively, one may add the symbol ∨ as a logical operator to the vocabulary, with appropriate axioms.
    • Similarly, (φ → ψ) ∧ (ψ → φ) is logically equivalent to φ ↔ ψ (logical biconditional), and one may use the latter as a shorthand for it, or alternatively add ↔ to the vocabulary with appropriate axioms. Sometimes φ ↔ ψ is written as φ ≡ ψ.
  6. Symbols denoting quantifiers: ∀ (universal quantification).
    • ¬(∀x ¬φ) is logically equivalent to ∃x φ (existential quantification). The latter can either be used as a shorthand for it, or ∃ can be added to the vocabulary together with appropriate axioms.
  7. Left and right parentheses.
    • There are many different conventions about where to put parentheses; for example, one might write ∀x or (∀x). Sometimes one uses colons or full stops instead of parentheses to make formulas unambiguous. One interesting but rather unusual convention is "Polish notation", where one omits all parentheses and writes ¬, →, and so on in front of their arguments rather than between them. Polish notation is compact and elegant, but rare because it is hard for humans to read.
  8. An identity or equality symbol = is sometimes but not always included in the vocabulary.
    • If equality is considered to be a part of first order logic, then the equality symbol behaves syntactically as a binary predicate. This case is sometimes called first order logic with equality.

There are several further minor variations listed below:

  • The set of primitive symbols (operators and quantifiers) often varies. It is possible to include other operators as primitive, such as ↔ (iff), ∧ (and) and ∨ (or), the truth constants T for "true" and F for "false" (these are operators of valence 0), and/or the Sheffer stroke (P | Q, a.k.a. NAND). The minimum number of primitive symbols needed is one, but if we restrict ourselves to the operators listed above, we need three, as above.
  • Some books and papers use the notation φ ⊃ ψ for φ → ψ. This is especially common in proof theory, where → is easily confused with the sequent arrow. One also sees ~φ for ¬φ, φ & ψ for φ ∧ ψ, and a wealth of notations for quantifiers; e.g., ∀x φ may be written as (x)φ. This latter notation is common in texts on recursion theory.
  • It is often easier in practice to use a simpler notation that supports infix operators and omits unnecessary parentheses. Thus if P is a relation of valence 2, we often write "a P b" instead of "P a b"; for example, we write 1 < 2 instead of <(1 2). Similarly if f is a function of valence 2, we sometimes write "a f b" instead of "f(a b)"; for example, we write 1 + 2 instead of +(1 2). By convention, infix operators tend to use non-alphabetic function names. It is also common to omit some parentheses if this does not lead to ambiguity (leading to defining a precedence).
  • Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). This notation may be taken to abbreviate a formula such as ∃x (P(x) ∧ ∀y (P(y) → (x = y))).

Computer programs that accept first-order logic representations will typically accept at least these quantifiers and operators (though they may use different symbols to represent them): ∀ (forall), ∃ (exists), ¬ (not), ∧ (and), ∨ (or), → (implies), and ↔ (if and only if).

The sets of constants, functions, and relations are usually considered to form a language, while the variables, logical operators, and quantifiers are usually considered to belong to the logic. For example, the language of group theory consists of one constant (the identity element), one function of valence 1 (the inverse), one function of valence 2 (the product), and one relation of valence 2 (equality), which would be omitted by authors who include equality in the underlying logic.

Formation rules

The formation rules define the terms, formulas and free variables in them as follows.

Terms

The set of terms is recursively defined by the following rules:

  1. Any constant is a term.
  2. Any variable is a term.
  3. Any expression f(t1,...,tn) of n ≥ 1 arguments (where each argument ti is a term and f is a function symbol of valence n) is a term.
  4. Closure clause: Nothing else is a term. For example, predicates are not terms.
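The four clauses above translate directly into a short recursive check. The following Python sketch assumes an illustrative signature (the particular constants, variables, and function valences are made up for the example):

```python
# Illustrative signature (an assumption for this sketch).
CONSTANTS = {"a", "b", "c"}
VARIABLES = {"x", "y", "z"}
FUNCTIONS = {"f": 1, "g": 2}   # function symbol -> valence

def is_term(t):
    """Check t against the recursive definition of terms."""
    if isinstance(t, str):                    # clauses 1 and 2
        return t in CONSTANTS or t in VARIABLES
    if isinstance(t, tuple) and len(t) >= 2:  # clause 3: ("f", t1, ..., tn)
        f, *args = t
        return (FUNCTIONS.get(f) == len(args)
                and all(is_term(a) for a in args))
    return False                              # clause 4: nothing else is a term

print(is_term(("g", "x", ("f", "a"))))  # True
print(is_term(("f", "x", "y")))         # False: f has valence 1
```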

Formulas

The set of well-formed formulas (usually called wffs or just formulas) is recursively defined by the following rules:

  1. Simple and complex predicates: If P is a relation of valence n ≥ 1 and the ai are terms, then P(a1,...,an) is well-formed. If equality is considered part of logic, then (a1 = a2) is well-formed. All such formulas are said to be atomic.
  2. Inductive Clause I: If φ is a wff, then ¬φ is a wff.
  3. Inductive Clause II: If φ and ψ are wffs, then (φ → ψ) is a wff.
  4. Inductive Clause III: If φ is a wff and x is a variable, then ∀x φ is a wff.
  5. Closure Clause: Nothing else is a wff.

For example, ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y),x,z))) is a well-formed formula, if f is a function of valence 1, P a predicate of valence 1 and Q a predicate of valence 3. ∀x x is not a well-formed formula.
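The formation rules can likewise be written as a recursive check. The sketch below encodes formulas as nested tuples such as ("imp", φ, ψ) and ("forall", x, φ); this encoding, and the small signature, are assumptions made for illustration:

```python
# Illustrative signature and tuple encoding (assumptions for this sketch).
VARIABLES = {"x", "y", "z"}
PREDICATES = {"P": 1, "Q": 3}
FUNCTIONS = {"f": 1}

def is_term(t):
    if isinstance(t, str):
        return t in VARIABLES
    return (isinstance(t, tuple) and FUNCTIONS.get(t[0]) == len(t) - 1
            and all(is_term(a) for a in t[1:]))

def is_wff(phi):
    if not isinstance(phi, tuple):
        return False
    head = phi[0]
    if head in PREDICATES:          # atomic formulas
        return (PREDICATES[head] == len(phi) - 1
                and all(is_term(a) for a in phi[1:]))
    if head == "not":               # Inductive Clause I
        return len(phi) == 2 and is_wff(phi[1])
    if head == "imp":               # Inductive Clause II
        return len(phi) == 3 and is_wff(phi[1]) and is_wff(phi[2])
    if head == "forall":            # Inductive Clause III
        return len(phi) == 3 and phi[1] in VARIABLES and is_wff(phi[2])
    return False                    # Closure Clause

# encoding of ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y),x,z)))
example = ("forall", "x", ("forall", "y",
           ("imp", ("P", ("f", "x")),
                   ("not", ("imp", ("P", "x"),
                                   ("Q", ("f", "y"), "x", "z"))))))
print(is_wff(example))                   # True
print(is_wff(("forall", "x", "x")))      # False: a bare term is not a formula
```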

In Computer Science terminology, a formula implements a built-in "boolean" type, while a term implements all other types.

Free and Bound Variables

  1. Atomic formulas: If φ is an atomic formula, then x is free in φ if and only if x occurs in φ.
  2. Inductive Clause I: x is free in ¬φ if and only if x is free in φ.
  3. Inductive Clause II: x is free in (φ → ψ) if and only if x is free in φ or x is free in ψ.
  4. Inductive Clause III: x is free in ∀y φ if and only if x is free in φ and x is a different symbol from y.
  5. Closure Clause: x is bound in φ if and only if x occurs in φ and x is not free in φ.
For example, in ∀x ∀y (P(x) → Q(x,f(x),z)), x and y are bound variables, z is a free variable, and w is neither because it does not occur in the formula.
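The free-variable clauses can be computed by a recursion over the same kind of nested-tuple encoding of formulas (an illustrative encoding in which, for simplicity, every leaf string is treated as a variable):

```python
def free_vars(phi):
    """Free variables of a formula, following the clauses above."""
    head = phi[0]
    if head == "not":
        return free_vars(phi[1])
    if head == "imp":
        return free_vars(phi[1]) | free_vars(phi[2])
    if head == "forall":
        return free_vars(phi[2]) - {phi[1]}   # the quantified variable is bound
    # atomic case: every variable occurring in the arguments is free
    out = set()
    def vars_of(t):
        if isinstance(t, str):
            out.add(t)
        else:
            for a in t[1:]:   # skip the function symbol at position 0
                vars_of(a)
    for a in phi[1:]:
        vars_of(a)
    return out

# encoding of ∀x ∀y (P(x) → Q(x, f(x), z)): z is free, x and y are bound
phi = ("forall", "x", ("forall", "y",
      ("imp", ("P", "x"), ("Q", "x", ("f", "x"), "z"))))
print(free_vars(phi))  # {'z'}
```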

Example

In mathematics the language of ordered abelian groups has one constant 0, one unary function −, one binary function +, and one binary relation ≤. So:

  • 0, x, y are atomic terms
  • +(x, y), +(x, +(y, −(z))) are terms, usually written as x + y, x + y − z
  • =(+(x, y), 0), ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas, usually written as x + y = 0, x + y − z ≤ x + y,
  • (∀x ∀y ≤(+(x, y), z)) → (∀x =(+(x, y), 0)) is a formula, usually written as (∀x ∀y x + y ≤ z) → (∀x x + y = 0).

Substitution

If t is a term and φ(x) is a formula possibly containing x as a free variable, then φ(t) is defined to be the result of replacing all free instances of x by t, provided that no free variable of t becomes bound in this process. If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the names of bound variables of φ to something other than the free variables of t.

To see why this condition is necessary, consider the formula φ(x) given by ∀y y ≤ x ("x is maximal"). If t is a term without y as a free variable, then φ(t) just means t is maximal. However, if t is y, the formula φ(y) is ∀y y ≤ y, which does not say that y is maximal. The problem is that the free variable y of t (=y) became bound when we substituted y for x in φ(x). So to form φ(y) we must first change the bound variable y of φ to something else, say z, so that φ(y) is then ∀z z ≤ y. Forgetting this condition is a notorious cause of errors.
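A capture-avoiding substitution along these lines can be sketched as follows; the nested-tuple encoding of formulas and the particular fresh-variable naming scheme are assumptions of the sketch:

```python
import itertools

def vars_of_term(t):
    if isinstance(t, str):
        return {t}
    return set().union(set(), *(vars_of_term(a) for a in t[1:]))

def vars_of_formula(phi):
    head = phi[0]
    if head == "not":
        return vars_of_formula(phi[1])
    if head == "imp":
        return vars_of_formula(phi[1]) | vars_of_formula(phi[2])
    if head == "forall":
        return {phi[1]} | vars_of_formula(phi[2])
    return set().union(set(), *(vars_of_term(a) for a in phi[1:]))

def subst_term(u, x, t):
    if isinstance(u, str):
        return t if u == x else u
    return (u[0],) + tuple(subst_term(a, x, t) for a in u[1:])

def fresh(avoid):
    # pick a variable name not in `avoid` (naming scheme is an assumption)
    return next(v for v in ("v%d" % i for i in itertools.count())
                if v not in avoid)

def subst(phi, x, t):
    """Substitute term t for the free occurrences of x in phi."""
    head = phi[0]
    if head == "not":
        return ("not", subst(phi[1], x, t))
    if head == "imp":
        return ("imp", subst(phi[1], x, t), subst(phi[2], x, t))
    if head == "forall":
        y, body = phi[1], phi[2]
        if y == x:                    # x is bound here: nothing to substitute
            return phi
        if y in vars_of_term(t):      # y would capture a free variable of t
            z = fresh(vars_of_term(t) | {x} | vars_of_formula(body))
            body = subst(body, y, z)  # rename bound y to fresh z first
            return ("forall", z, subst(body, x, t))
        return ("forall", y, subst(body, x, t))
    return (head,) + tuple(subst_term(a, x, t) for a in phi[1:])

# φ(x) = ∀y y ≤ x ("x is maximal"); substituting y for x renames the bound y
phi = ("forall", "y", ("le", "y", "x"))
print(subst(phi, "x", "y"))  # ('forall', 'v0', ('le', 'v0', 'y'))
```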

Inference rules

An inference rule relates a fixed number of well-formed formulas, called premises, to one well-formed formula, called conclusion.

Inference rules are used to prove theorems, which are sentences provable in a theory. If the premises of an inference rule are theorems, then its conclusion is a theorem as well. In other words, inference rules are used to generate "new" theorems from "old" ones. The procedure of generating theorems is known as predicate calculus and is described in the relevant section below.

The inference rule modus ponens is the only one required from propositional logic for the formalization given here. It states that if φ and φ → ψ are both theorems, then ψ is a theorem. This can be written as follows:

if ⊢ φ and ⊢ (φ → ψ), then ⊢ ψ

where ⊢ is a sign that the formula following it is a theorem.


The inference rule called Universal Generalization is characteristic of the predicate calculus. It can be stated as

if ⊢ φ, then ⊢ ∀x φ

which reads: if φ is a theorem, then "for every x, φ" is a theorem as well.

Notice that the similar-looking formula φ → ∀x φ is NOT a theorem, unless x does not occur free in φ (see Generalization (logic)).

Generalization is analogous to the Necessitation Rule of modal logic, which is

if ⊢ P, then ⊢ □P

which reads: if P is a theorem, then "P is necessary" is a theorem.

Axioms

Here follows a description of the axioms of first-order logic. As explained above, a given first-order theory has further, non-logical axioms. The following logical axioms characterize a predicate calculus for this article's example of first-order logic[1].

For any theory, it is of interest to know whether the set of axioms can be generated by an algorithm, or if there is an algorithm which determines whether a well-formed formula is an axiom.

If there is an algorithm to generate all axioms, then the set of axioms is said to be recursively enumerable.

If there is an algorithm which determines after a finite number of steps whether a formula is an axiom or not, then the set of axioms is said to be recursive or decidable. In that case, one may also construct an algorithm to generate all axioms: this algorithm simply builds all possible formulas one by one (with growing length), and for each formula the algorithm determines whether it is an axiom.
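The construction just described (turning a decision procedure for axioms into an enumeration of all axioms) can be sketched directly; the alphabet and the toy "axiom set" below are assumptions chosen purely for illustration:

```python
from itertools import count, product

# Tiny illustrative alphabet; '>' stands in for the conditional arrow.
ALPHABET = "pq()>"

def is_axiom(s):
    # A toy *decidable* set of axioms (an assumption for the sketch).
    return s in {"p", "q", "(p>q)"}

def enumerate_axioms():
    """Build all strings by growing length; yield those is_axiom accepts."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            s = "".join(chars)
            if is_axiom(s):
                yield s

gen = enumerate_axioms()
print([next(gen) for _ in range(3)])  # ['p', 'q', '(p>q)']
```

Because `is_axiom` always halts, the generator eventually emits every axiom, which is exactly the passage from a recursive (decidable) set to a recursively enumerable one.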

Axioms of first-order logic are always decidable. However, in a first-order theory the non-logical axioms are not necessarily decidable.

Quantifier axioms

Quantifier axioms change according to how the vocabulary is defined, how the substitution procedure works, what the formation rules are, and which inference rules are used. Here follows a specific example of these axioms:

  • PRED-1: (∀x Z(x)) → Z(t)
  • PRED-2: Z(t) → (∃x Z(x))
  • PRED-3: (∀x (W → Z(x))) → (W → ∀x Z(x))
  • PRED-4: (∀x (Z(x) → W)) → ((∃x Z(x)) → W)

These are actually axiom schemata: the expression W stands for any wff in which x is not free, and the expression Z(x) stands for any wff with the additional convention that Z(t) stands for the result of substitution of the term t for x in Z(x). Thus this is a recursive set of axioms.

Another axiom, Z → ∀x Z, for Z in which x does not occur, is sometimes added.

Equality and its axioms

There are several different conventions for using equality (or identity) in first order logic. This section summarizes the main ones. The various conventions all give essentially the same results with about the same amount of work, and differ mainly in terminology.

  • The most common convention for equality is to include the equality symbol as a primitive logical symbol, and add the axioms for equality to the axioms for first order logic. The equality axioms are
x = x
x = yf(...,x,...) = f(...,y,...) for any function f
x = y → (P(...,x,...) → P(...,y,...)) for any relation P (including the equality relation itself)
These, too, are axiom schemata: they define an algorithm which decides whether a given formula is an axiom. Thus this is a recursive set of axioms.
  • The next most common convention is to include the equality symbol as one of the relations of a theory, and add the equality axioms to the axioms of the theory. In practice this is almost indistinguishable from the previous convention, except in the unusual case of theories with no notion of equality. The axioms are the same, and the only difference is whether one calls some of them logical axioms or axioms of the theory.
  • In theories with no functions and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
For example, in set theory with one relation ∈, we may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality.
Alternatively, if one does use the equality symbol as a relation of the theory or of logic, then one would have to add axioms. In the previous example, one would have to add the axiom ∀s ∀t ((∀x (x ∈ s ↔ x ∈ t)) → s = t).
  • In some theories it is possible to give ad hoc definitions of equality. For example, in a theory of partial orders with one relation ≤ we could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.
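As a concrete (and entirely illustrative) instance of such an ad hoc definition: divisibility on the positive integers is a partial order, and the abbreviation s = t for (s ≤ t) ∧ (t ≤ s) can be checked directly:

```python
def leq(s, t):
    # "s divides t": a partial order on the positive integers
    return t % s == 0

def eq(s, t):
    # equality defined as the abbreviation (s ≤ t) ∧ (t ≤ s)
    return leq(s, t) and leq(t, s)

print(eq(6, 6))  # True
print(eq(4, 6))  # False: neither divides the other
```

Antisymmetry of the partial order is exactly what makes this defined equality coincide with genuine identity.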

Predicate calculus

The predicate calculus is an extension of the propositional calculus that defines which statements of first order logic are provable. It is a formal system used to describe mathematical theories. If the propositional calculus is defined with a suitable set of axioms and the single rule of inference modus ponens (this can be done in many different ways), then the predicate calculus can be defined by appending some additional axioms and the additional inference rule "universal generalization". More precisely, as axioms for the predicate calculus we take:

  • All tautologies from the propositional calculus (with the proposition variables replaced by formulas).
  • The axioms for quantifiers, given above.
  • The axioms for equality given above, if equality is regarded as a logical concept.

A sentence is defined to be provable in first order logic if it can be obtained by starting with the axioms of the predicate calculus and repeatedly applying the inference rules "modus ponens" and "universal generalization". Thus:

  • An axiom of the predicate calculus is provable in first order logic.
  • If the premises of an inference rule are provable in first order logic, then so is its conclusion.

If we have a theory T (a set of statements, called axioms, in some language) then a sentence φ is defined to be provable in the theory T if a ∧ b ∧ ... → φ is provable in first order logic, for some finite set of axioms a, b,... of the theory T. In other words, if one can prove in first order logic that φ follows from the axioms of T. This also means that we replace the above procedure for finding provable sentences by the following one:

  • An axiom of T is provable in T.
  • An axiom of the predicate calculus is provable in T.
  • If the premises of an inference rule are provable in T, then so is its conclusion.

One apparent problem with this definition of provability is that it seems rather ad hoc: we have taken some apparently random collection of axioms and rules of inference, and it is unclear that we have not accidentally missed out some vital axiom or rule. Gödel's completeness theorem assures us that this is not really a problem: the theorem states that any statement true in all models is provable in first order logic. In particular, any reasonable definition of "provable" in first order logic must be equivalent to the one above (though it is possible for the lengths of proofs to differ vastly for different definitions of provability).

There are many different (but equivalent) ways to define provability. The definition above is a typical example of a "Hilbert style" calculus, which has a lot of different axioms but very few rules of inference. The "Gentzen style" predicate calculus differs in that it has very few axioms but many rules of inference.

Provable identities

The following identities are examples of sentences provable in first order logic:

  • P ∧ ∃x Q(x) ↔ ∃x (P ∧ Q(x)) (where the variable x must not occur free in P)
  • P ∨ ∀x Q(x) ↔ ∀x (P ∨ Q(x)) (where the variable x must not occur free in P)

Possible additional inference rules

The following set of inference rules is an example of such a set that can be added to the two inference rules discussed above:

  • From ⊢ ∀x P(x), infer ⊢ P(c) (if c is a variable, then it must not already be quantified somewhere in P(x))
  • From ⊢ P(c), infer ⊢ ∃x P(x) (x must not appear free in P(c))

Metalogical theorems of first-order logic

Some important metalogical theorems are listed below in bulleted form. What they roughly mean is that a sentence is valid if and only if it is provable. Furthermore, one can construct an algorithm which works as follows: if a sentence is provable, the algorithm will tell us that in a finite but unknown amount of time. If a sentence is unprovable, the algorithm will run forever, and we will not know whether the sentence is unprovable or provable and the algorithm has just not yet told us that. Such an algorithm is called semidecidable.

One may construct an algorithm which will determine in a finite number of steps whether a sentence is provable (a decision procedure) only for simple classes of first-order logic.

  1. The decision problem for validity is semidecidable; in other words, there is a Turing machine that when given any sentence as input, will halt if and only if the sentence is valid (true in all models).
    • As Gödel's completeness theorem shows, for any valid formula P, P is provable. Conversely, assuming consistency of the logic, any provable formula is valid.
    • The Turing machine can be one which generates all provable formulas in the following manner: for a finite or recursively enumerable set of axioms, such a machine can generate an axiom, then generate a new provable formula by applying inference rules to axioms and formulas already generated, then generate another axiom, and so on. Given a sentence as input, the Turing machine simply goes on generating all provable formulas one by one, and halts if it generates the sentence.
  2. Unlike the propositional logic, first-order logic is undecidable, provided that the language has at least one predicate of valence at least 2 other than equality. This means that there is no decision procedure that correctly determines whether an arbitrary formula is valid. Because there is a Turing machine as described above, the undecidability is related to the unsolvability of the Halting problem: there is no algorithm which determines after a finite number of steps whether the Turing machine will ever halt for a given sentence as its input, hence whether the sentence is provable. This result was established independently by Church and Turing.
  3. Monadic predicate logic (i.e., predicate logic with only predicates of one argument and no functions) is decidable.
  4. The Bernays–Schönfinkel class of first order formulas is also decidable.

Metalogical theorems of first-order theories

It is important to bear in mind that the results quoted above apply only to first-order logic itself, and not to first-order theories in general. The latter include non-logical axioms as well as logical ones, and while completeness still guarantees that every semantic consequence of a theory's axioms is provable from them, it says nothing about truth in a theory's intended interpretation. In many first-order theories, and in particular every consistent, effectively axiomatized theory that includes the natural numbers, Gödel's incompleteness theorems show that there are statements true in the intended interpretation which cannot be proven.

Translating natural language to first-order logic

Concepts expressed in natural language must be "translated" to first-order logic (FOL) before FOL can be used to address them, and there are a number of potential pitfalls in this translation. In FOL, p ∨ q means "p, or q, or both", that is, it is inclusive. In English, the word "or" is sometimes inclusive (e.g., "cream or sugar?"), but sometimes it is exclusive (e.g., "coffee or tea?" is usually intended to mean one or the other, not both). Similarly, the English word "some" may mean "at least one, possibly all", but other times it may mean "not all, possibly none". The English word "and" should sometimes be translated as "or" (e.g., "men and women may apply").[2]

Limitations of first-order logic

All mathematical notations have their strengths and weaknesses; here are a few such issues with first-order logic.

Difficulty representing if-then-else

Oddly enough, FOL with equality (as typically defined) does not include or permit defining an if-then-else predicate or function if(c,a,b), where "c" is a condition expressed as a formula, while a and b are either both terms or both formulas, and its result would be "a" if c is true, and "b" if it is false. The problem is that in FOL, both predicates and functions can only accept terms ("non-booleans") as parameters, but the "obvious" representation of the condition is a formula ("boolean"). This is unfortunate, since many mathematical functions are conveniently expressed in terms of if-then-else, and if-then-else is fundamental for describing most computer programs.

Mathematically, it is possible to redefine a complete set of new functions that match the formula operators, but this is quite clumsy.[3] A predicate ite(c,a,b) can be expressed in FOL if rewritten as (c ∧ a) ∨ (¬c ∧ b), but this is clumsy if the condition c is complex. Some sources extend FOL to add a special-case predicate named "ite(condition, a, b)" (where a and b are formulas) and/or function "if(condition, a, b)" (where a and b are terms), both of which accept a formula as the condition, and are equal to "a" if the condition is true and "b" if it is false.[4] Others extend FOL further so that functions and predicates can accept both terms and formulas at any position.
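The rewriting just mentioned is mechanical enough to sketch: a hypothetical ite connective between formulas is eliminated recursively in favor of (c ∧ a) ∨ (¬c ∧ b). The nested-tuple encoding of formulas is an assumption of the sketch:

```python
def eliminate_ite(phi):
    """Recursively rewrite ite(c, a, b) as (c and a) or (not c and b)."""
    if not isinstance(phi, tuple):
        return phi
    if phi[0] == "ite":
        c, a, b = (eliminate_ite(p) for p in phi[1:])
        return ("or", ("and", c, a), ("and", ("not", c), b))
    # any other connective or predicate: rewrite the subformulas
    return (phi[0],) + tuple(eliminate_ite(p) for p in phi[1:])

# ite(P(x), Q(x), R(x))  becomes  (P(x) ∧ Q(x)) ∨ (¬P(x) ∧ R(x))
print(eliminate_ite(("ite", ("P", "x"), ("Q", "x"), ("R", "x"))))
```

Note that the condition c is duplicated, which is exactly why the article calls the rewriting clumsy when c is complex.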

Typing (Sorts)

FOL does not include types (sorts) in the notation itself, other than the difference between formulas ("booleans") and terms ("non-booleans"). Some argue that this lack of types is a great advantage,[5] but many others find advantages in defining and using types (sorts). Those who wish to indicate types must provide such information using the notation available in FOL. Doing so can make expressions more complex, and such annotations are also easy to get wrong.

Single-parameter predicates can be used to implement the notion of types where appropriate. For example, in ∀x (Man(x) → Mortal(x)), the predicate Man(x) could be considered a kind of "type assertion" (that is, that x must be a Man). Predicates can also be used with the "exists" quantifier to identify types, but this should usually be done with the "and" operator instead, e.g.: ∃x (Man(x) ∧ Mortal(x)) ("there exists something that is both a man and is mortal"). It is easy to write ∃x (Man(x) → Mortal(x)), but this would be equivalent to ∃x (¬Man(x) ∨ Mortal(x)) ("there is something that is not a man, and/or there exists something that is mortal"), which is usually not what was intended. Similarly, assertions can be made that one type is a subtype of another type, e.g.: ∀x (Man(x) → Mammal(x)).
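The difference between the two existential translations can be seen by evaluating both in a small finite model (the model and the names in it are assumptions chosen to separate the two readings):

```python
# Toy model (an illustrative assumption): Fido is not a man,
# and nothing at all is mortal in this deliberately odd model.
domain = ["Socrates", "Fido"]
man    = {"Socrates"}
mortal = set()

# ∃x (Man(x) ∧ Mortal(x)): intended "some man is mortal"; False here
exists_and = any(x in man and x in mortal for x in domain)

# ∃x (Man(x) → Mortal(x)): True here, satisfied vacuously by Fido
exists_imp = any((x not in man) or (x in mortal) for x in domain)

print(exists_and, exists_imp)  # False True
```

Anything outside the "type" Man satisfies the implication vacuously, which is why the "and" translation is the one that captures the intended typed reading.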

Graph reachability cannot be expressed

Many situations can be modeled as a graph of nodes and directed connections (edges). For example, validating many systems requires showing that a "bad" state cannot be reached from a "good" state, and these interconnections of states can often be modelled as a graph. However, it can be proved that reachability cannot be fully expressed in predicate logic. In other words, there is no predicate-logic formula f, with u and v as its only free variables, and R as its only predicate symbol (of arity 2), such that f holds in a directed graph iff there is a path in that graph from the node associated with u to the node associated with v.[6]
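Although no single first-order formula expresses reachability, the relation itself is easily computable; a standard graph search over a toy edge relation (the graph below is an illustrative assumption) makes the contrast concrete:

```python
# Toy directed graph: the edge relation R, of arity 2 (an assumption).
edges = {("good", "a"), ("a", "b"), ("bad", "c")}

def reachable(u, v):
    """Is there a directed path from u to v? (transitive closure of R)"""
    seen, frontier = {u}, [u]
    while frontier:
        node = frontier.pop()
        if node == v:
            return True
        for (s, t) in edges:
            if s == node and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

print(reachable("good", "b"))    # True
print(reachable("good", "bad"))  # False: no path from "good" to "bad"
```

The computation iterates the edge relation an unbounded number of times, which is precisely the transitive closure that a single fixed first-order formula cannot express.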

Comparison with other logics

  • Typed first order logic allows variables and terms to have various types (or sorts). If there are only a finite number of types, this does not really differ much from first order logic, because one can describe the types with a finite number of unary predicates and a few axioms. Sometimes there is a special type Ω of truth values, in which case formulas are just terms of type Ω.
  • Weak second-order logic allows quantification over finite subsets.
  • Monadic second-order logic allows quantification over subsets, or in other words over unary predicates.
  • Second-order logic allows quantification over subsets and relations, or in other words over all predicates. For example, the axiom of extensionality can be stated in second-order logic as x = y ≡def ∀P (P(x) ↔ P(y)). The strong semantics of second-order logic give such sentences a much stronger meaning than first-order semantics.
  • Higher-order logics allow quantification over higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and so on.
  • Intuitionistic first order logic uses intuitionistic rather than classical propositional calculus; for example, ¬¬φ need not be equivalent to φ.
  • Modal logic has extra modal operators with meanings which can be characterised informally as, for example "it is necessary that φ" and "it is possible that φ".
  • In monadic predicate calculus predicates are restricted to having only one argument.
  • Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
  • First order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branched quantification and the plural quantifiers of George Boolos and others.
  • Independence-friendly logic is characterized by branching quantifiers, which allow one to express independence between quantified variables.

Most of these logics are in some sense extensions of first order logic: they include all the quantifiers and logical operators of first order logic with the same meanings. Lindström showed that first order logic has no extensions (other than itself) that satisfy both the compactness theorem and the downward Löwenheim-Skolem theorem. A precise statement of Lindström's theorem requires listing several pages of technical conditions that the logic is assumed to satisfy; for example, changing the symbols of a language should make no essential difference to which sentences are true.

First order logic in which no atomic sentence lies in the scope of more than three quantifiers has the same expressive power as the relation algebra of Tarski and Givant (1987). They also show that FOL with a primitive ordered pair, and a relation algebra including the two ordered-pair projection relations, are equivalent.

References

  1. ^ For another well-worked example, see Metamath proof explorer
  2. ^ Suber, Peter, Translation Tips, retrieved 2007-09-20
  3. ^ Otter Example if.in, retrieved 2007-09-21
  4. ^ Manna, Zohar (1974), Mathematical Theory of Computation, McGraw-Hill Computer Science Series, New York, New York: McGraw-Hill Book Company, pp. 77–147, ISBN 0-07-039910-7
  5. ^ Leslie Lamport, Lawrence C. Paulson. Should Your Specification Language Be Typed? ACM Transactions on Programming Languages and Systems. 1998. http://citeseer.ist.psu.edu/71125.html
  6. ^ Huth, Michael; Ryan, Mark (2004), Logic in Computer Science, 2nd edition, pp. 138–139, ISBN 0-521-54310-X

See also