Definable real number
A real number a is first-order definable in the language of set theory, without parameters, if there is a formula φ in the language of set theory, with one free variable, such that a is the unique real number such that φ(a) holds in the standard model of set theory (see Kunen 1980:153).
For the purposes of this article, such reals will be called simply definable numbers. This should not be understood to be standard terminology.
Note that this definition cannot be expressed in the language of set theory itself.
Assuming they form a set, the definable numbers form a field containing all the familiar real numbers such as 0, 1, π, e, et cetera. In particular, this field contains all the numbers named in the mathematical constants article, and all algebraic numbers (and therefore all rational numbers). However, most real numbers are not definable: the set of all definable numbers is countably infinite (because the set of all logical formulas is) while the set of real numbers is uncountably infinite (see Cantor's diagonal argument). As a result, most real numbers have no description (in the same sense of "most" as 'most real numbers are not rational').
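The countability claim can be made concrete: formulas are finite strings over a fixed finite alphabet, and the finite strings over any finite alphabet can be enumerated by length. A minimal Python sketch (the two-letter alphabet "ab" is an arbitrary stand-in for the symbols of set theory):

```python
from itertools import count, product

def all_strings(alphabet):
    """Enumerate every finite string over a finite alphabet, shortest first.

    Formulas of set theory are finite strings over a fixed finite alphabet,
    so this enumeration shows the set of formulas is countable: each string
    (hence each defining formula) appears at some finite index.
    """
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings("ab")
first = [next(gen) for _ in range(6)]
# first == ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since only countably many formulas exist, only countably many reals can be singled out by one, which is the heart of the cardinality argument above.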
The field of definable numbers is not complete; there exist convergent sequences of definable numbers whose limit is not definable (since every real number is the limit of a sequence of rational numbers). However, if the sequence itself is definable in the sense that we can specify a single formula for all its terms, then its limit will necessarily be a definable number.
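For instance (an illustrative example, not from the source), the partial sums of ∑ 1/k! form a single definable sequence of rationals, so their limit, Euler's number e, is itself definable. A sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import e, factorial

def partial_sum(n):
    """n-th term of the definable sequence a_n = sum_{k=0}^{n} 1/k!.

    One formula defines every term of the sequence at once, so by the
    remark above its limit, Euler's number e, is a definable number.
    """
    return sum(Fraction(1, factorial(k)) for k in range(n + 1))

approx = partial_sum(15)
assert abs(float(approx) - e) < 1e-12   # already accurate to ~1/16!
```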
While every computable number is definable, the converse does not hold: real numbers encoding the halting problem, Chaitin's constant Ω, the truth set of first-order arithmetic, and 0# are examples of numbers that are definable but not computable. Many other such numbers are known.
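What makes a number computable is an algorithm that produces arbitrarily good rational approximations to it. A sketch, with √2 as a stand-in example, approximating it by bisection using only exact integer comparisons; for a number such as Chaitin's constant, no algorithm of this kind can exist even though a defining formula does:

```python
from fractions import Fraction

def sqrt2_approx(eps):
    """Return a rational q with |q - sqrt(2)| < eps, by bisection.

    The existence of such an approximation algorithm is exactly what
    makes sqrt(2) a computable number.
    """
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:               # exact rational comparison
            lo = mid
        else:
            hi = mid
    return lo

q = sqrt2_approx(Fraction(1, 10**6))
assert abs(q * q - 2) < Fraction(3, 10**6)
```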
One may also wish to talk about definable complex numbers: complex numbers which are uniquely defined by a logical formula. However, whether this is possible depends on how the field of complex numbers is derived in the first place: it may not be possible to distinguish a complex number from its conjugate (say, 3+i from 3-i), since it is impossible to find a property of one that is not also a property of the other, without falling back on the underlying set-theoretic definition. Assuming we can define at least one nonreal complex number, however, a complex number is definable if and only if both its real part and its imaginary part are definable. The definable complex numbers also form a field if they form a set.
The related concept of "standard" numbers, which can only be defined within a finite time and space, is used to motivate axiomatic internal set theory, and to provide a workable formulation for illimited and infinitesimal numbers. Definitions of the hyperreal line within non-standard analysis (the subject area dealing with such numbers) overwhelmingly include the usual, uncountable set of real numbers as a subset.
The notion does not exhaust "unambiguously described" numbers
Not every number that we would informally describe as unambiguously specified is definable in the above sense. For example, if we can enumerate all definable numbers by the Gödel numbers of their defining formulas, then we can use Cantor's diagonal argument to produce a particular real that is not first-order definable in the same language. The argument runs as follows:
Suppose that, in a mathematical language L, it is possible to enumerate all of the numbers definable in L. Let this enumeration be given by a function G : N → R, where G(n) is the real number described by the n-th description in the sequence. Using the diagonal argument, it is possible to define a real number x that is not equal to G(n) for any n. Thus x, although unambiguously described, is not definable in L; it may nonetheless be definable in a richer language L′.
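The diagonal step can be sketched directly: given any map digit_of(n, k) returning the k-th decimal digit of the n-th enumerated real, one defines a new digit stream that differs from the n-th real at position n. The names here are illustrative, and digits 4 and 5 are chosen (a standard trick) to dodge the dual representation 0.0999… = 0.1000…:

```python
def diagonal_digit(d):
    """Pick a digit different from d, avoiding 0 and 9 so the constructed
    expansion is the unique decimal expansion of its value."""
    return 4 if d != 4 else 5

def diagonalize(digit_of):
    """Given digit_of(n, k) = k-th decimal digit of the n-th enumerated
    real, return the digit function of a real equal to none of them:
    it differs from the n-th real in its n-th digit."""
    return lambda n: diagonal_digit(digit_of(n, n))

# Toy enumeration: the n-th "real" has all digits equal to n mod 10.
toy = lambda n, k: n % 10
x = diagonalize(toy)
digits = [x(n) for n in range(6)]
# digits == [4, 4, 4, 4, 5, 4]; entry n differs from toy's n-th digit
```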
Other notions of definability
The notion of definability treated in this article has been chosen primarily for definiteness, not because it is more useful or interesting than other notions. Here we treat a few of the others:
Definability in other languages or structures
Language of arithmetic
The language of arithmetic has symbols for 0, 1, the successor operation, addition, and multiplication, intended to be interpreted in the usual way over the natural numbers. Since no variables of this language range over the real numbers, we cannot simply copy the earlier definition of definability. Rather, we say that a real a is definable in the language of arithmetic (or arithmetical) if its Dedekind cut can be defined as a predicate in that language; that is, if there is a first-order formula φ in the language of arithmetic, with three free variables, such that, for all natural numbers m, n, and p, φ(m, n, p) holds if and only if (−1)^p · m/(n + 1) < a.
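Writing a rational as (−1)^p · m/(n + 1) for natural numbers m, n, p, the Dedekind cut of a concrete real such as √2 (chosen here purely as an illustration) reduces to an integer inequality, which is expressible in the language of arithmetic. A sketch of the corresponding predicate, using only integer arithmetic to mirror an arithmetical formula:

```python
def phi(m, n, p):
    """Arithmetical predicate defining the Dedekind cut of sqrt(2):
    holds iff the rational (-1)**p * m / (n + 1) is below sqrt(2).
    Only integer operations appear, mirroring a formula of arithmetic."""
    if p % 2 == 1:
        # Non-positive rationals are all below sqrt(2) > 0.
        return True
    # m/(n+1) < sqrt(2)  iff  m^2 < 2*(n+1)^2, since both sides are >= 0.
    return m * m < 2 * (n + 1) * (n + 1)

assert phi(1, 0, 0)        # 1 < sqrt(2)
assert not phi(3, 1, 0)    # 3/2 > sqrt(2)
assert phi(3, 1, 1)        # -3/2 < sqrt(2)
```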
Second-order language of arithmetic
The second-order language of arithmetic is the same as the first-order language, except that variables and quantifiers are allowed to range over sets of naturals. A real that is second-order definable in the language of arithmetic is called analytical.
Definability with ordinal parameters
Sometimes it is of interest to consider definability with parameters; that is, to give a definition relative to another object that remains undefined. For example, a real a (or for that matter, any set a) is called ordinal definable if there is a first-order formula φ in the language of set theory, with two free variables, and an ordinal γ, such that a is the unique object such that φ(a,γ) holds (in V).
The other sorts of definability thus far considered have only countably many defining formulas, and therefore allow only countably many definable reals. This is not true for ordinal definability, because an ordinal definable real is defined not only by the formula φ, but also by the ordinal γ. In fact it is consistent with ZFC that all reals are ordinal-definable, and therefore that there are uncountably many ordinal-definable reals. However it is also consistent with ZFC that there are only countably many ordinal-definable reals.
- Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs, Amsterdam: North-Holland, ISBN 978-0-444-85401-8
- Turing, Alan (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society (Turing's original paper distinguishing computable and definable numbers)