# Forcing (mathematics)


In the mathematical discipline of set theory, forcing is a technique discovered by Paul Cohen for proving consistency and independence results. It was first used, in 1963, to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. Forcing was considerably reworked and simplified in the following years, and has since served as a powerful technique, both in set theory and in areas of mathematical logic such as recursion theory.

Descriptive set theory uses the notion of forcing from both recursion theory and set theory. Forcing has also been used in model theory, but it is common in model theory to define genericity directly without mention of forcing.

## Intuition

Intuitively, forcing consists of expanding the set theoretical universe ${\displaystyle V}$ to a larger universe ${\displaystyle V^{*}}$. In this bigger universe, for example, one might have lots of new subsets of ${\displaystyle \omega }$ ${\displaystyle =\{0,1,2,\ldots \}}$ that were not there in the old universe, and thereby violate the continuum hypothesis. While impossible on the face of it, this is just another version of Cantor's paradox about infinity. In principle, one could consider

${\displaystyle V^{*}=V\times \{0,1\}}$,

identify ${\displaystyle x\in V}$ with ${\displaystyle (x,0)}$, and then introduce an expanded membership relation involving “new” sets of the form ${\displaystyle (x,1)}$. Forcing is a more elaborate version of this idea, reducing the expansion to the existence of one new set, and allowing for fine control over the properties of the expanded universe.

Cohen’s original technique, now called ramified forcing, is slightly different from the unramified forcing expounded here. Forcing is also equivalent to the method of Boolean-valued models, which some feel is conceptually more natural and intuitive, but usually much more difficult to apply.

## Forcing posets

A forcing poset is an ordered triple, ${\displaystyle (\mathbb {P} ,\leq ,\mathbf {1} )}$, where ${\displaystyle \leq }$ is a preorder on ${\displaystyle \mathbb {P} }$ that is atomless, meaning that it satisfies the following condition:

• For each ${\displaystyle p\in \mathbb {P} }$, there are ${\displaystyle q,r\in \mathbb {P} }$ such that ${\displaystyle q,r\leq p}$, with no ${\displaystyle s\in \mathbb {P} }$ such that ${\displaystyle s\leq q,r}$.

The largest element of ${\displaystyle \mathbb {P} }$ is ${\displaystyle \mathbf {1} }$, that is, ${\displaystyle p\leq \mathbf {1} }$ for all ${\displaystyle p\in \mathbb {P} }$. Members of ${\displaystyle \mathbb {P} }$ are called forcing conditions or just conditions. One reads ${\displaystyle p\leq q}$ as “${\displaystyle p}$ is stronger than ${\displaystyle q}$”. Intuitively, the “smaller” condition provides “more” information, just as the smaller interval ${\displaystyle [3.1415926,3.1415927]}$ provides more information about the number ${\displaystyle \pi }$ than the interval ${\displaystyle [3.1,3.2]}$ does.

There are various conventions in use. Some authors require ${\displaystyle \leq }$ to also be antisymmetric, so that the relation is a partial order. Some use the term partial order anyway, conflicting with standard terminology, while some use the term preorder. The largest element can be dispensed with. The reverse ordering is also used, most notably by Saharon Shelah and his co-authors.

### ${\displaystyle \mathbb {P} }$-names

Associated with a forcing poset ${\displaystyle \mathbb {P} }$ is the class ${\displaystyle V^{(\mathbb {P} )}}$ of ${\displaystyle \mathbb {P} }$-names. A ${\displaystyle \mathbb {P} }$-name is a set ${\displaystyle A}$ of the form

${\displaystyle A\subseteq \{(u,p)\mid u~{\text{is a}}~\mathbb {P} {\text{-name and}}~p\in \mathbb {P} \}.}$

This is actually a definition by transfinite recursion. More precisely, one first uses transfinite recursion to define the following hierarchy:

${\displaystyle {\begin{aligned}\operatorname {Name} (\varnothing )&=\varnothing ;\\\operatorname {Name} (\alpha +1)&={\mathcal {P}}(\operatorname {Name} (\alpha )\times \mathbb {P} ),~{\text{where}}~{\mathcal {P}}~{\text{denotes the power-set operator}};\\\operatorname {Name} (\lambda )&=\bigcup \{\operatorname {Name} (\alpha )\mid \alpha <\lambda \},~{\text{if}}~\lambda ~{\text{is a limit ordinal}}.\end{aligned}}}$
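For a finite carrier set of conditions, the first few levels of this hierarchy can be computed directly. A small Python sketch (an illustrative encoding, with names modelled as frozensets of (name, condition) pairs; only the carrier of ${\displaystyle \mathbb {P} }$ matters here):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def name_hierarchy(P, steps):
    """Name(0), ..., Name(steps) over a finite carrier P of conditions."""
    levels = [frozenset()]                         # Name(0) = empty set
    for _ in range(steps):
        pairs = {(u, p) for u in levels[-1] for p in P}
        levels.append(frozenset(powerset(pairs)))  # Name(a+1) = P(Name(a) x P)
    return levels

# With two conditions the levels have sizes 0, 1, 4, 256: an iterated
# power set, growing like the cumulative hierarchy.
levels = name_hierarchy({'p', 'q'}, 3)
assert [len(level) for level in levels] == [0, 1, 4, 256]
```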

Then the class of ${\displaystyle \mathbb {P} }$-names is defined as

${\displaystyle V^{(\mathbb {P} )}=\bigcup \{\operatorname {Name} (\alpha )~|~\alpha ~{\text{is an ordinal}}\}.}$

The ${\displaystyle \mathbb {P} }$-names are, in fact, an expansion of the universe. Given ${\displaystyle x\in V}$, one defines ${\displaystyle {\check {x}}}$ to be the ${\displaystyle \mathbb {P} }$-name

${\displaystyle {\check {x}}=\{({\check {y}},\mathbf {1} )\mid y\in x\}.}$

Again, this is really a definition by transfinite recursion.
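The recursion defining ${\displaystyle {\check {x}}}$ can be run directly on hereditarily finite sets. In this sketch (an illustrative encoding), sets are modelled as frozensets and the string `'one'` stands in for the largest condition ${\displaystyle \mathbf {1} }$:

```python
ONE = 'one'   # stand-in for the largest condition 1 of the poset

def check(x):
    """x-check: the canonical P-name for x (sets modelled as frozensets)."""
    return frozenset((check(y), ONE) for y in x)

empty = frozenset()
assert check(empty) == frozenset()
# {emptyset}-check pairs the name of each member with the condition 1:
assert check(frozenset({empty})) == frozenset({(frozenset(), ONE)})
```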

### Interpretation

Given any subset ${\displaystyle G}$ of ${\displaystyle \mathbb {P} }$, one next defines the interpretation or valuation map from ${\displaystyle \mathbb {P} }$-names by

${\displaystyle \operatorname {val} (u,G)=\{\operatorname {val} (v,G)\mid \exists p\in G:~(v,p)\in u\}.}$

This is again a definition by transfinite recursion. Note that if ${\displaystyle \mathbf {1} \in G}$, then ${\displaystyle \operatorname {val} ({\check {x}},G)=x}$. One then defines

${\displaystyle {\underline {G}}=\{({\check {p}},p)\mid p\in \mathbb {P} \}}$

so that ${\displaystyle \operatorname {val} ({\underline {G}},G)=G}$.
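The valuation recursion, together with the fact that ${\displaystyle \operatorname {val} ({\check {x}},G)=x}$ whenever ${\displaystyle \mathbf {1} \in G}$, can be checked on small examples. A sketch under the same illustrative encoding as above (frozensets for sets, `'one'` for ${\displaystyle \mathbf {1} }$):

```python
ONE = 'one'   # stand-in for the largest condition 1

def check(x):
    """x-check: the canonical P-name for x."""
    return frozenset((check(y), ONE) for y in x)

def val(u, G):
    """val(u, G) = { val(v, G) : (v, p) in u for some p in G }."""
    return frozenset(val(v, G) for (v, p) in u if p in G)

# With 1 in G, every check-name is interpreted as the set it names.
x = frozenset({frozenset(), frozenset({frozenset()})})   # the set {0, {0}}
assert val(check(x), {ONE}) == x
# A pair attached to a condition outside G contributes nothing:
u = frozenset({(check(x), 'some other condition')})
assert val(u, {ONE}) == frozenset()
```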

### Example

A good example of a forcing poset is ${\displaystyle (\operatorname {Bor} (I),\subseteq ,I)}$, where ${\displaystyle I=[0,1]}$ and ${\displaystyle \operatorname {Bor} (I)}$ is the collection of Borel subsets of ${\displaystyle I}$ having non-zero Lebesgue measure. In this case, one can talk about the conditions as being probabilities, and a ${\displaystyle \operatorname {Bor} (I)}$-name assigns membership in a probabilistic sense. Due to the ready intuition this example can provide, probabilistic language is sometimes used with other forcing posets.

## Countable transitive models and generic filters

The key step in forcing is, given a ${\displaystyle {\mathsf {ZFC}}}$ universe ${\displaystyle V}$, to find an appropriate object ${\displaystyle G}$ not in ${\displaystyle V}$. The resulting class of all interpretations of ${\displaystyle \mathbb {P} }$-names will turn out to be a model of ${\displaystyle {\mathsf {ZFC}}}$ that properly extends the original ${\displaystyle V}$ (since ${\displaystyle G\notin V}$).

Instead of working with ${\displaystyle V}$, one considers a countable transitive model ${\displaystyle M}$ with ${\displaystyle (\mathbb {P} ,\leq ,\mathbf {1} )\in M}$. By “model”, we mean a model of set theory, either of all of ${\displaystyle {\mathsf {ZFC}}}$, or a model of a large but finite subset of ${\displaystyle {\mathsf {ZFC}}}$, or some variant thereof. “Transitivity” means that if ${\displaystyle x\in y\in M}$, then ${\displaystyle x\in M}$. The Mostowski Collapsing Theorem says that this can be assumed if the membership relation is well-founded. The effect of transitivity is that membership and other elementary notions can be handled intuitively. Countability of the model relies on the Löwenheim-Skolem Theorem.

As ${\displaystyle M}$ is a set, there are sets not in ${\displaystyle M}$ – this follows from Russell’s Paradox. The appropriate set ${\displaystyle G}$ to pick and adjoin to ${\displaystyle M}$ is a generic filter on ${\displaystyle \mathbb {P} }$. The “filter” condition means that

• ${\displaystyle G\subseteq \mathbb {P} }$;
• ${\displaystyle \mathbf {1} \in G}$;
• if ${\displaystyle p\geq q\in G}$, then ${\displaystyle p\in G}$;
• if ${\displaystyle p,q\in G}$, then there exists an ${\displaystyle r\in G}$ such that ${\displaystyle r\leq p,q}$.

For ${\displaystyle G}$ to be “generic” means:

• If ${\displaystyle D\in M}$ is a “dense” subset of ${\displaystyle \mathbb {P} }$ (that is, for each ${\displaystyle p\in \mathbb {P} }$, there exists a ${\displaystyle q\in D}$ such that ${\displaystyle q\leq p}$), then ${\displaystyle G\cap D\neq \varnothing }$.

The existence of a generic filter ${\displaystyle G}$ follows from the Rasiowa-Sikorski Lemma. In fact, slightly more is true: Given a condition ${\displaystyle p\in \mathbb {P} }$, one can find a generic filter ${\displaystyle G}$ such that ${\displaystyle p\in G}$. Due to the splitting condition, if ${\displaystyle G}$ is a filter, then ${\displaystyle \mathbb {P} \setminus G}$ is dense. If ${\displaystyle G\in M}$, then ${\displaystyle \mathbb {P} \setminus G\in M}$ because ${\displaystyle M}$ is a model of ${\displaystyle {\mathsf {ZFC}}}$. For this reason, a generic filter is never in ${\displaystyle M}$.
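The Rasiowa–Sikorski construction is a simple diagonalization: enumerate the countably many dense sets and repeatedly strengthen the current condition into the next one. A sketch for the Cohen poset ${\displaystyle \operatorname {Fin} (\omega ,2)}$ treated below, with each dense set given by a “density witness” function (an implementation convenience, not standard terminology) that maps a condition to a stronger condition inside the dense set:

```python
# Conditions: finite partial 0/1-functions, as frozensets of (n, bit) pairs.

def meet_dense(p, witnesses):
    """Descending chain p = p_0 >= p_1 >= ..., with p_{i+1} in the i-th dense set."""
    chain = [p]
    for D in witnesses:
        chain.append(D(chain[-1]))    # strengthen into the next dense set
    return chain

def witness_defined_at(n):
    """Density witness for D_n = { p : p(n) is defined }."""
    def D(p):
        if any(k == n for k, _ in p):
            return p                  # p is already in D_n
        return p | {(n, 0)}           # any extension deciding n would do
    return D

chain = meet_dense(frozenset(), [witness_defined_at(n) for n in range(5)])
g = dict(set().union(*chain))         # union of the chain: a partial function
assert all(n in g for n in range(5))  # every dense set D_0..D_4 was met
```

The filter generated by such a chain meets every listed dense set; with only finitely many witnesses this is a caricature, but the genuine lemma iterates the same step ${\displaystyle \omega }$ times.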

## Forcing

Given a generic filter ${\displaystyle G\subseteq \mathbb {P} }$, one proceeds as follows. The subclass of ${\displaystyle \mathbb {P} }$-names in ${\displaystyle M}$ is denoted ${\displaystyle M^{(\mathbb {P} )}}$. Let ${\displaystyle M[G]=\left\{\operatorname {val} (u,G)~{\Big |}~u\in M^{(\mathbb {P} )}\right\}}$. To reduce the study of the set theory of ${\displaystyle M[G]}$ to that of ${\displaystyle M}$, one works with the “forcing language”, which is built up like ordinary first-order logic, with membership as the binary relation and all the ${\displaystyle \mathbb {P} }$-names as constants.

Define ${\displaystyle p\Vdash _{M,\mathbb {P} }\varphi (u_{1},\ldots ,u_{n})}$ (to be read as “${\displaystyle p}$ forces ${\displaystyle \varphi }$ in the model ${\displaystyle M}$ with poset ${\displaystyle \mathbb {P} }$”), where ${\displaystyle p}$ is a condition, ${\displaystyle \varphi }$ is a formula in the forcing language, and the ${\displaystyle u_{i}}$’s are ${\displaystyle \mathbb {P} }$-names, to mean that if ${\displaystyle G}$ is a generic filter containing ${\displaystyle p}$, then ${\displaystyle M[G]\models \varphi (\operatorname {val} (u_{1},G),\ldots ,\operatorname {val} (u_{n},G))}$. The special case ${\displaystyle \mathbf {1} \Vdash _{M,\mathbb {P} }\varphi }$ is often written as “${\displaystyle \mathbb {P} \Vdash _{M,\mathbb {P} }\varphi }$” or simply “${\displaystyle \Vdash _{M,\mathbb {P} }\varphi }$”. Such statements are true in ${\displaystyle M[G]}$, no matter what ${\displaystyle G}$ is.

What is important is that this external definition of the forcing relation ${\displaystyle p\Vdash _{M,\mathbb {P} }\varphi }$ is equivalent to an internal definition within ${\displaystyle M}$, defined by transfinite induction over the ${\displaystyle \mathbb {P} }$-names on instances of ${\displaystyle u\in v}$ and ${\displaystyle u=v}$, and then by ordinary induction over the complexity of formulas. This has the effect that all the properties of ${\displaystyle M[G]}$ are really properties of ${\displaystyle M}$, and the verification of ${\displaystyle {\mathsf {ZFC}}}$ in ${\displaystyle M[G]}$ becomes straightforward. This is usually summarized as the following three key properties:

• Truth: ${\displaystyle M[G]\models \varphi (\operatorname {val} (u_{1},G),\ldots ,\operatorname {val} (u_{n},G))}$ if and only if it is forced by ${\displaystyle G}$, that is, for some condition ${\displaystyle p\in G}$, we have ${\displaystyle p\Vdash _{M,\mathbb {P} }\varphi (u_{1},\ldots ,u_{n})}$.
• Definability: The statement “${\displaystyle p\Vdash _{M,\mathbb {P} }\varphi (u_{1},\ldots ,u_{n})}$” is definable in ${\displaystyle M}$.
• Coherence: If ${\displaystyle p\Vdash _{M,\mathbb {P} }\varphi (u_{1},\ldots ,u_{n})}$ and ${\displaystyle q\leq p}$, then ${\displaystyle q\Vdash _{M,\mathbb {P} }\varphi (u_{1},\ldots ,u_{n})}$.

We define the forcing relation ${\displaystyle \Vdash _{M,\mathbb {P} }}$ in ${\displaystyle M}$ by induction on the complexity of formulas, in which we first define the relation for atomic formulas by ${\displaystyle \in }$-induction and then define it for arbitrary formulas by induction on their complexity.

We first define the forcing relation on atomic formulas, treating the two kinds of formulas ${\displaystyle x\in y}$ and ${\displaystyle x=y}$ simultaneously. That is, we define one relation ${\displaystyle R(p,a,b,t,\mathbb {P} )}$, where ${\displaystyle t}$ denotes the type of formula, as follows:

1. ${\displaystyle R(p,a,b,0,\mathbb {P} )}$ means ${\displaystyle p\Vdash _{\mathbb {P} }a\in b}$.

2. ${\displaystyle R(p,a,b,1,\mathbb {P} )}$ means ${\displaystyle p\Vdash _{\mathbb {P} }a=b}$.

3. ${\displaystyle R(p,a,b,2,\mathbb {P} )}$ means ${\displaystyle p\Vdash _{\mathbb {P} }a\subseteq b}$.

Here ${\displaystyle p}$ is a condition and ${\displaystyle a}$ and ${\displaystyle b}$ are ${\displaystyle \mathbb {P} }$-names. The formula ${\displaystyle R(p,a,b,t,\mathbb {P} )}$ is defined by ${\displaystyle \in }$-induction:

R1. ${\displaystyle R(p,a,b,0,\mathbb {P} )}$ if and only if ${\displaystyle (\forall q\leq p)(\exists r\leq q)(\exists (s,c)\in b)(r\leq s\,\land \,R(r,a,c,1,\mathbb {P} ))}$.

R2. ${\displaystyle R(p,a,b,1,\mathbb {P} )}$ if and only if ${\displaystyle R(p,a,b,2,\mathbb {P} )\,\land \,R(p,b,a,2,\mathbb {P} )}$.

R3. ${\displaystyle R(p,a,b,2,\mathbb {P} )}$ if and only if ${\displaystyle (\forall (s,c)\in a)(\forall q\leq p)(\exists r\leq q)(r\leq s\,\Rightarrow \,R(r,c,b,0,\mathbb {P} ))}$.

More formally, we use the following binary relation on ${\displaystyle \mathbb {P} }$-names: let ${\displaystyle S(a,b)}$ hold for names ${\displaystyle a}$ and ${\displaystyle b}$ if and only if ${\displaystyle (p,a)\in b}$ for at least one condition ${\displaystyle p}$. This relation is well-founded, which means that for any name ${\displaystyle b}$, the class of all names ${\displaystyle a}$ such that ${\displaystyle S(a,b)}$ holds is a set, and there is no function ${\displaystyle f:\omega \to \operatorname {Name} }$ such that ${\displaystyle (\forall n\in \omega )\,S(f(n+1),f(n))}$.

A well-founded relation is not a preorder in general, because it need not be transitive. But if we view it as an “ordering”, it is a relation without infinite decreasing sequences in which, for any element, the class of elements below it is a set.

Any binary relation can easily be closed under transitivity. For names ${\displaystyle a}$ and ${\displaystyle b}$, ${\displaystyle a<b}$ holds if there is at least one finite sequence ${\displaystyle c_{0},\dots ,c_{n}}$ (as a map with domain ${\displaystyle \{0,\dots ,n\}}$), for some ${\displaystyle n>0}$, such that ${\displaystyle c_{0}=a}$, ${\displaystyle c_{n}=b}$, and ${\displaystyle S(c_{i-1},c_{i})}$ holds for every ${\displaystyle 0<i\leq n}$. Such an ordering is well-founded too.

We define the following well-founded ordering on pairs of names: ${\displaystyle T((a,b),(c,d))}$ holds if one of the following holds:

1. ${\displaystyle \max\{a,b\}<\max\{c,d\}}$,

2. ${\displaystyle \max\{a,b\}=\max\{c,d\}}$ and ${\displaystyle \min\{a,b\}<\min\{c,d\}}$,

3. ${\displaystyle \max\{a,b\}=\max\{c,d\}}$, ${\displaystyle \min\{a,b\}=\min\{c,d\}}$ and ${\displaystyle a<c}$.

The relation ${\displaystyle R(p,a,b,t,\mathbb {P} )}$ is defined by recursion on the pair ${\displaystyle (a,b)}$ of names: for any pair, it is defined in terms of the same relation on “simpler” pairs. In fact, by the recursion theorem there is a formula ${\displaystyle R(p,a,b,t,\mathbb {P} )}$ for which R1, R2 and R3 are theorems, because its truth value at a given point is defined by its truth values at “smaller” points relative to some well-founded relation used as an “ordering”. We are now ready to define the forcing relation:

1. ${\displaystyle p\Vdash _{\mathbb {P} }a\in b}$ means ${\displaystyle a,b\in \operatorname {Name} \,\land \,R(p,a,b,0,\mathbb {P} )}$.

2. ${\displaystyle p\Vdash _{\mathbb {P} }a=b}$ means ${\displaystyle a,b\in \operatorname {Name} \,\land \,R(p,a,b,1,\mathbb {P} )}$.

3. ${\displaystyle p\Vdash _{\mathbb {P} }\lnot f(a_{1},\dots ,a_{n})}$ means ${\displaystyle a_{1},\dots ,a_{n}\in \operatorname {Name} \,\land \,\lnot (\exists q\leq p)\,q\Vdash _{\mathbb {P} }f(a_{1},\dots ,a_{n})}$.

4. ${\displaystyle p\Vdash _{\mathbb {P} }(f(a_{1},\dots ,a_{n})\land g(a_{1},\dots ,a_{n}))}$ means ${\displaystyle a_{1},\dots ,a_{n}\in \operatorname {Name} \,\land \,(p\Vdash _{\mathbb {P} }f(a_{1},\dots ,a_{n}))\land (p\Vdash _{\mathbb {P} }g(a_{1},\dots ,a_{n}))}$.

5. ${\displaystyle p\Vdash _{\mathbb {P} }(\forall x)f(a_{1},\dots ,a_{n},x)}$ means ${\displaystyle a_{1},\dots ,a_{n}\in \operatorname {Name} \,\land \,(\forall b\in \operatorname {Name} )\,p\Vdash _{\mathbb {P} }f(a_{1},\dots ,a_{n},b)}$.

In effect, this transforms an arbitrary formula ${\displaystyle f(x_{1},\dots ,x_{n})}$ into a formula ${\displaystyle p\Vdash _{\mathbb {P} }f(x_{1},\dots ,x_{n})}$, where ${\displaystyle p}$ and ${\displaystyle \mathbb {P} }$ are additional variables. This defines the forcing relation in the universe of all sets, regardless of any countable transitive model. However, there is a relation between this “syntactic” formulation of forcing and the “semantic” formulation of forcing over a countable transitive model ${\displaystyle M}$:

1. For any formula ${\displaystyle f(x_{1},\dots ,x_{n})}$, there is a theorem ${\displaystyle T}$ of the theory ${\displaystyle {\mathsf {ZFC}}}$ (for example, a conjunction of a finite number of axioms) such that, for any countable transitive model ${\displaystyle M}$ with ${\displaystyle M\models T}$, any atomless partial order ${\displaystyle \mathbb {P} \in M}$ and any ${\displaystyle \mathbb {P} }$-generic filter ${\displaystyle G}$ over ${\displaystyle M}$,

${\displaystyle (\forall a_{1},\dots ,a_{n}\in M^{\mathbb {P} })(\forall p\in \mathbb {P} )(p\Vdash _{M,\mathbb {P} }f(a_{1},\dots ,a_{n})\,\Leftrightarrow \,M\models p\Vdash _{\mathbb {P} }f(a_{1},\dots ,a_{n}))}$.

This is the definability property of the forcing relation announced above.

## Consistency

The discussion above can be summarized by the fundamental consistency result that, given a forcing poset ${\displaystyle \mathbb {P} }$, we may assume the existence of a generic filter ${\displaystyle G}$, not belonging to the universe ${\displaystyle V}$, such that ${\displaystyle V[G]}$ is again a set-theoretic universe that models ${\displaystyle {\mathsf {ZFC}}}$. Furthermore, all truths in ${\displaystyle V[G]}$ may be reduced to truths in ${\displaystyle V}$ involving the forcing relation.

Both styles, adjoining ${\displaystyle G}$ to either a countable transitive model ${\displaystyle M}$ or the whole universe ${\displaystyle V}$, are commonly used. Less commonly seen is the approach using the “internal” definition of forcing, in which no mention of set or class models is made. This was Cohen’s original method, and in one elaboration, it becomes the method of Boolean-valued analysis.

## Cohen forcing

The simplest nontrivial forcing poset is ${\displaystyle (\operatorname {Fin} (\omega ,2),\supseteq ,0)}$, the finite partial functions from ${\displaystyle \omega }$ to ${\displaystyle 2~{\stackrel {\text{df}}{=}}~\{0,1\}}$ under reverse inclusion. That is, a condition ${\displaystyle p}$ is essentially two disjoint finite subsets ${\displaystyle {p^{-1}}[1]}$ and ${\displaystyle {p^{-1}}[0]}$ of ${\displaystyle \omega }$, to be thought of as the “yes” and “no” parts of ${\displaystyle p}$, with no information provided on values outside the domain of ${\displaystyle p}$. “${\displaystyle q}$ is stronger than ${\displaystyle p}$” means that ${\displaystyle q\supseteq p}$, in other words, the “yes” and “no” parts of ${\displaystyle q}$ are supersets of the “yes” and “no” parts of ${\displaystyle p}$, and in that sense, provide more information.

Let ${\displaystyle G}$ be a generic filter for this poset. If ${\displaystyle p}$ and ${\displaystyle q}$ are both in ${\displaystyle G}$, then ${\displaystyle p\cup q}$ is a condition because ${\displaystyle G}$ is a filter. This means that ${\displaystyle g=\bigcup G}$ is a well-defined partial function from ${\displaystyle \omega }$ to ${\displaystyle 2}$ because any two conditions in ${\displaystyle G}$ agree on their common domain.

In fact, ${\displaystyle g}$ is a total function. Given ${\displaystyle n\in \omega }$, let ${\displaystyle D_{n}=\{p\mid p(n)~{\text{is defined}}\}}$. Then ${\displaystyle D_{n}}$ is dense. (Given any ${\displaystyle p}$, if ${\displaystyle n}$ is not in ${\displaystyle p}$’s domain, adjoin a value for ${\displaystyle n}$ — the result is in ${\displaystyle D_{n}}$.) A condition ${\displaystyle p\in G\cap D_{n}}$ has ${\displaystyle n}$ in its domain, and since ${\displaystyle p\subseteq g}$, we find that ${\displaystyle g(n)}$ is defined.

Let ${\displaystyle X={g^{-1}}[1]}$, the set of all “yes” members of the generic conditions. It is possible to give a name for ${\displaystyle X}$ directly. Let ${\displaystyle {\underline {X}}=\{({\check {n}},p)\mid p(n)=1\}}$. Then ${\displaystyle \operatorname {val} ({\underline {X}},G)=X}$. Now suppose that ${\displaystyle A\subseteq \omega }$ in ${\displaystyle V}$. We claim that ${\displaystyle X\neq A}$. Let ${\displaystyle D_{A}=\{p\mid (\exists n)(n\in \operatorname {Dom} (p)\land (p(n)=1\iff n\notin A))\}}$. Then ${\displaystyle D_{A}}$ is dense. (Given any ${\displaystyle p}$, if ${\displaystyle n}$ is not in ${\displaystyle p}$’s domain, adjoin a value for ${\displaystyle n}$ contrary to the status of “${\displaystyle n\in A}$”.) Then any ${\displaystyle p\in G\cap D_{A}}$ witnesses ${\displaystyle X\neq A}$. To summarize, ${\displaystyle X}$ is a “new” subset of ${\displaystyle \omega }$, necessarily infinite.
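The density of ${\displaystyle D_{A}}$ amounts to a one-step extension: any condition that so far agrees with ${\displaystyle A}$ can be strengthened at a fresh input to disagree with it. A sketch, in the same frozenset-of-pairs encoding of conditions used above (the `bound` on the search for a fresh input is an implementation convenience):

```python
# Density of D_A = { p : exists n in dom(p) with p(n) = 1 iff n not in A }:
# every condition extends into D_A, so the generic set X differs from
# every ground-model set A.

def disagree_with(p, A, bound=100):
    """Extend the condition p (frozenset of (n, bit) pairs) into D_A."""
    if any((b == 1) != (n in A) for n, b in p):
        return p                              # p already disagrees with A
    dom = {n for n, _ in p}
    n = next(k for k in range(bound) if k not in dom)      # a fresh input
    return p | {(n, 0 if n in A else 1)}      # answer contrary to "n in A"

A = {0, 2, 4}
p = frozenset({(0, 1), (1, 0)})               # agrees with A so far
q = disagree_with(p, A)
assert q >= p                                 # q is a stronger condition
assert any((b == 1) != (n in A) for n, b in q)   # q lies in D_A
```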

Replacing ${\displaystyle \omega }$ with ${\displaystyle \omega \times \omega _{2}}$, that is, considering instead finite partial functions whose inputs are of the form ${\displaystyle (n,\alpha )}$, with ${\displaystyle n<\omega }$ and ${\displaystyle \alpha <\omega _{2}}$, and whose outputs are ${\displaystyle 0}$ or ${\displaystyle 1}$, one gets ${\displaystyle \omega _{2}}$ new subsets of ${\displaystyle \omega }$. They are all distinct, by a density argument: given ${\displaystyle \alpha <\beta <\omega _{2}}$, let ${\displaystyle D_{\alpha ,\beta }=\{p\mid (\exists n)(p(n,\alpha )\neq p(n,\beta ))\}}$; then each ${\displaystyle D_{\alpha ,\beta }}$ is dense, and a generic condition in it proves that the ${\displaystyle \alpha }$-th new set disagrees somewhere with the ${\displaystyle \beta }$-th new set.

This is not yet the falsification of the continuum hypothesis. One must prove that no new maps have been introduced which map ${\displaystyle \omega }$ onto ${\displaystyle \omega _{1}}$, or ${\displaystyle \omega _{1}}$ onto ${\displaystyle \omega _{2}}$. For example, if one considers instead ${\displaystyle \operatorname {Fin} (\omega ,\omega _{1})}$, finite partial functions from ${\displaystyle \omega }$ to ${\displaystyle \omega _{1}}$, the first uncountable ordinal, one gets in ${\displaystyle V[G]}$ a bijection from ${\displaystyle \omega }$ to ${\displaystyle \omega _{1}}$. In other words, ${\displaystyle \omega _{1}}$ has collapsed, and in the forcing extension, is a countable ordinal.
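The collapsing mechanism is again a density argument: for each target value ${\displaystyle s}$, the set of conditions whose range contains ${\displaystyle s}$ is dense, so the generic function is onto. A sketch with a small finite stand-in for the target set (in the actual forcing the target would be ${\displaystyle \omega _{1}}$; the `bound` is an implementation convenience):

```python
# Conditions: finite partial functions from the naturals to a target set S,
# as frozensets of (n, s) pairs.

def hit_value(p, s, bound=100):
    """Density witness for D_s = { p : s in ran(p) }: extend p to take value s."""
    if s in {v for _, v in p}:
        return p                              # p already hits s
    n = next(k for k in range(bound) if k not in {m for m, _ in p})
    return p | {(n, s)}                       # assign s at a fresh input

S = ['a', 'b', 'c']       # finite stand-in for the countably many (in M)
p = frozenset()           # ordinals below omega_1
for s in S:               # meet each dense set D_s in turn, as a generic must
    p = hit_value(p, s)
assert {v for _, v in p} == set(S)    # the union of the chain surjects onto S
```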

The last step in showing the independence of the continuum hypothesis, then, is to show that Cohen forcing does not collapse cardinals. For this, a sufficient combinatorial property is that all of the antichains of this poset are countable.

## The countable chain condition

An antichain ${\displaystyle A}$ of ${\displaystyle \mathbb {P} }$ is a subset such that if ${\displaystyle p,q\in A}$, then ${\displaystyle p}$ and ${\displaystyle q}$ are incompatible (written ${\displaystyle p\perp q}$), meaning there is no ${\displaystyle r}$ in ${\displaystyle \mathbb {P} }$ such that ${\displaystyle r\leq p}$ and ${\displaystyle r\leq q}$. In the example on Borel sets, incompatibility means that ${\displaystyle p\cap q}$ has zero measure. In the example on finite partial functions, incompatibility means that ${\displaystyle p\cup q}$ is not a function, in other words, ${\displaystyle p}$ and ${\displaystyle q}$ assign different values to some domain input.

${\displaystyle \mathbb {P} }$ satisfies the countable chain condition (c.c.c.) if and only if every antichain in ${\displaystyle \mathbb {P} }$ is countable. (The name, which is obviously inappropriate, is a holdover from older terminology. Some mathematicians write “c.a.c.” for “countable antichain condition”.)

It is easy to see that ${\displaystyle \operatorname {Bor} (I)}$ satisfies the c.c.c. because the measures add up to at most ${\displaystyle 1}$. Also, ${\displaystyle \operatorname {Fin} (E,2)}$ is c.c.c., but the proof is more difficult.

Given an uncountable subfamily ${\displaystyle W\subseteq \operatorname {Fin} (E,2)}$, shrink ${\displaystyle W}$ to an uncountable subfamily ${\displaystyle W_{0}}$ of sets of size ${\displaystyle n}$, for some ${\displaystyle n<\omega }$. If ${\displaystyle p(e_{1})=b_{1}}$ for uncountably many ${\displaystyle p\in W_{0}}$, shrink this to an uncountable subfamily ${\displaystyle W_{1}}$ and repeat, getting a finite set ${\displaystyle \{(e_{1},b_{1}),\ldots ,(e_{k},b_{k})\}}$ and an uncountable family ${\displaystyle W_{k}}$ of incompatible conditions of size ${\displaystyle n-k}$ such that every ${\displaystyle e}$ is in ${\displaystyle \operatorname {Dom} (p)}$ for at most countably many ${\displaystyle p\in W_{k}}$. Now, pick an arbitrary ${\displaystyle p\in W_{k}}$, and pick from ${\displaystyle W_{k}}$ any ${\displaystyle q}$ that is not one of the countably many members having a domain element in common with ${\displaystyle p}$. Then ${\displaystyle p\cup \{(e_{1},b_{1}),\ldots ,(e_{k},b_{k})\}}$ and ${\displaystyle q\cup \{(e_{1},b_{1}),\ldots ,(e_{k},b_{k})\}}$ are compatible, so ${\displaystyle W}$ is not an antichain. In other words, ${\displaystyle \operatorname {Fin} (E,2)}$-antichains are countable.

The importance of antichains in forcing is that for most purposes, dense sets and maximal antichains are equivalent. A maximal antichain ${\displaystyle A}$ is one that cannot be extended to a larger antichain. This means that every element ${\displaystyle p\in \mathbb {P} }$ is compatible with some member of ${\displaystyle A}$. The existence of a maximal antichain follows from Zorn’s Lemma. Given a maximal antichain ${\displaystyle A}$, let ${\displaystyle D=\{p\in \mathbb {P} \mid (\exists q\in A)(p\leq q)\}}$. Then ${\displaystyle D}$ is dense, and ${\displaystyle G\cap D\neq \varnothing }$ if and only if ${\displaystyle G\cap A\neq \varnothing }$. Conversely, given a dense set ${\displaystyle D}$, Zorn’s Lemma shows that there exists a maximal antichain ${\displaystyle A\subseteq D}$, and then ${\displaystyle G\cap D\neq \varnothing }$ if and only if ${\displaystyle G\cap A\neq \varnothing }$.
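The passage from a maximal antichain to a dense set can be checked exhaustively on a finite toy order (which, being finite, is not atomless and so not a genuine forcing poset, but it suffices to illustrate the equivalence):

```python
from itertools import product

# Toy order: binary strings of length <= 2, ordered by end-extension
# (a longer string is a stronger condition; '' plays the role of 1).
P = [''] + [''.join(t) for k in (1, 2) for t in product('01', repeat=k)]

def leq(p, q):
    """p is stronger than q: p end-extends q."""
    return p.startswith(q)

def compatible(p, q):
    return any(leq(r, p) and leq(r, q) for r in P)

A = ['0', '1']    # an antichain: no condition extends both '0' and '1'
assert not compatible('0', '1')
assert all(any(compatible(p, q) for q in A) for p in P)   # A is maximal

# Downward closure of A: dense, and a filter meets D iff it meets A.
D = [p for p in P if any(leq(p, q) for q in A)]
assert all(any(leq(d, p) for d in D) for p in P)          # D is dense
```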

Assume that ${\displaystyle \mathbb {P} }$ is c.c.c. Given ${\displaystyle x,y\in V}$, with ${\displaystyle f:x\to y}$ a function in ${\displaystyle V[G]}$, one can approximate ${\displaystyle f}$ inside ${\displaystyle V}$ as follows. Let ${\displaystyle u}$ be a name for ${\displaystyle f}$ (by the definition of ${\displaystyle V[G]}$) and let ${\displaystyle p}$ be a condition that forces ${\displaystyle u}$ to be a function from ${\displaystyle x}$ to ${\displaystyle y}$. Define a function ${\displaystyle F}$, whose domain is ${\displaystyle x}$, by ${\displaystyle F(a)~{\stackrel {\text{df}}{=}}~\{b\mid (\exists q\in \mathbb {P} )[(q\leq p)\land (q\Vdash ~u({\check {a}})={\check {b}})]\}}$. By the definability of forcing, this definition makes sense within ${\displaystyle V}$. By the coherence of forcing, different ${\displaystyle b}$’s come from incompatible ${\displaystyle q}$’s. By c.c.c., ${\displaystyle F(a)}$ is countable.

In summary, ${\displaystyle f}$ is unknown in ${\displaystyle V}$ as it depends on ${\displaystyle G}$, but it is not wildly unknown for a c.c.c.-forcing. One can identify a countable set of guesses for what the value of ${\displaystyle f}$ is at any input, independent of ${\displaystyle G}$.

This has the following very important consequence. If in ${\displaystyle V[G]}$, ${\displaystyle f:\alpha \to \beta }$ is a surjection from one infinite ordinal onto another, then there is a surjection ${\displaystyle g:\omega \times \alpha \to \beta }$ in ${\displaystyle V}$, and consequently, a surjection ${\displaystyle h:\alpha \to \beta }$ in ${\displaystyle V}$. In particular, cardinals cannot collapse. The conclusion is that ${\displaystyle 2^{\aleph _{0}}\geq \aleph _{2}}$ in ${\displaystyle V[G]}$.

## Easton forcing

The exact value of the continuum in the above Cohen model, and variants like ${\displaystyle \operatorname {Fin} (\omega \times \kappa ,2)}$ for cardinals ${\displaystyle \kappa }$ in general, was worked out by Robert M. Solovay, who also worked out how to violate ${\displaystyle {\mathsf {GCH}}}$ (the generalized continuum hypothesis), for regular cardinals only, a finite number of times. For example, in the above Cohen model, if ${\displaystyle {\mathsf {CH}}}$ holds in ${\displaystyle V}$, then ${\displaystyle 2^{\aleph _{0}}=\aleph _{2}}$ holds in ${\displaystyle V[G]}$.

William B. Easton worked out the proper class version of violating the ${\displaystyle {\mathsf {GCH}}}$ for regular cardinals, basically showing that the known restrictions, (monotonicity, Cantor’s Theorem and König’s Theorem), were the only ${\displaystyle {\mathsf {ZFC}}}$-provable restrictions (see Easton’s Theorem).

Easton’s work was notable in that it involved forcing with a proper class of conditions. In general, the method of forcing with a proper class of conditions fails to give a model of ${\displaystyle {\mathsf {ZFC}}}$. For example, forcing with ${\displaystyle \operatorname {Fin} (\omega \times \mathbf {On} ,2)}$, where ${\displaystyle \mathbf {On} }$ is the proper class of all ordinals, makes the continuum a proper class. On the other hand, forcing with ${\displaystyle \operatorname {Fin} (\omega ,\mathbf {On} )}$ introduces a countable enumeration of the ordinals. In both cases, the resulting ${\displaystyle V[G]}$ is visibly not a model of ${\displaystyle {\mathsf {ZFC}}}$.

At one time, it was thought that more sophisticated forcing would also allow an arbitrary variation in the powers of singular cardinals. However, this has turned out to be a difficult, subtle and even surprising problem, with several more restrictions provable in ${\displaystyle {\mathsf {ZFC}}}$ and with the forcing models depending on the consistency of various large-cardinal properties. Many open problems remain.

## Random reals

Random forcing can be defined as forcing over the set ${\displaystyle P}$ of all compact subsets of ${\displaystyle [0,1]}$ of positive measure, ordered by the relation ${\displaystyle \subseteq }$ (a smaller set, in the sense of inclusion, is a smaller set in the ordering and represents a condition with more information). There are two types of important dense sets:

1. For any positive integer ${\displaystyle n}$, the set ${\displaystyle D_{n}=\{p\in P\,|\,\operatorname {diam} (p)<{\tfrac {1}{n}}\}}$ is dense, where ${\displaystyle \operatorname {diam} (p)}$ is the diameter of the set ${\displaystyle p}$.

2. For any Borel subset ${\displaystyle B}$ of ${\displaystyle [0,1]}$ of measure 1, the set ${\displaystyle D_{B}=\{p\in P\,|\,p\subseteq B\}}$ is dense.

For any filter ${\displaystyle G}$ and any finitely many elements ${\displaystyle p_{1},\dots ,p_{n}\in G}$, there is ${\displaystyle q\in G}$ such that ${\displaystyle q\leq p_{1},\dots ,p_{n}}$. In the case of this ordering, this means that any filter is a set of compact sets with the finite intersection property, and therefore the intersection of all elements of any filter is nonempty. If ${\displaystyle G}$ is a filter intersecting the dense set ${\displaystyle D_{n}}$ for every positive integer ${\displaystyle n}$, then ${\displaystyle G}$ contains conditions of arbitrarily small positive diameter. Therefore, the intersection of all conditions from ${\displaystyle G}$ has diameter 0, and the only nonempty sets of diameter 0 are singletons. So there is exactly one real number ${\displaystyle r}$ such that ${\displaystyle r\in \bigcap G}$.
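The shrinking-diameter argument can be run numerically. The sketch below is a caricature of random forcing: conditions are restricted to closed intervals with rational endpoints (a subfamily of the actual compact conditions), and a fixed real plays the role of the random real toward which a descending chain is steered:

```python
from fractions import Fraction

def halve_toward(p, x):
    """A stronger condition (sub-interval) of half the diameter containing x."""
    a, b = p
    m = (a + b) / 2
    return (a, m) if x <= m else (m, b)

x = Fraction(1, 3)                       # stand-in for the random real
chain = [(Fraction(0), Fraction(1))]     # start from the weakest condition
for _ in range(20):                      # meet D_1, D_2, ... in turn
    chain.append(halve_toward(chain[-1], x))

a, b = chain[-1]
assert b - a == Fraction(1, 2**20)                # diameters shrink to 0
assert all(p[0] <= x <= p[1] for p in chain)      # x lies in every condition
```

The intersection of the chain is the singleton ${\displaystyle \{x\}}$, mirroring the claim that a filter meeting every ${\displaystyle D_{n}}$ determines exactly one real.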

Let ${\displaystyle B\subseteq [0,1]}$ be any Borel set of measure 1. If ${\displaystyle G}$ intersects ${\displaystyle D_{B}}$, then ${\displaystyle r\in B}$.

However, a generic filter over a countable transitive model ${\displaystyle V}$ is not in ${\displaystyle V}$. The real ${\displaystyle r}$ is defined by ${\displaystyle G}$, and one can prove that it is not in ${\displaystyle V}$ either. The problem is that if ${\displaystyle p\in P}$, then ${\displaystyle V\models }$"${\displaystyle p}$ is compact", but from the viewpoint of the universe, ${\displaystyle p}$ can fail to be compact, and the intersection of all conditions from the generic filter may actually be empty. For this reason, we consider the set ${\displaystyle C=\{{\bar {p}}\,|\,p\in G\}}$ of closures (in the topological sense) of the conditions from the generic filter. Since ${\displaystyle {\bar {p}}\supseteq p}$ and ${\displaystyle G}$ has the finite intersection property, the set ${\displaystyle C}$ has the finite intersection property too. The elements of ${\displaystyle C}$ are closed and bounded, being closures of bounded sets. Therefore, ${\displaystyle C}$ is a set of compact sets with the finite intersection property, and hence has nonempty intersection. Since ${\displaystyle diam({\bar {p}})=diam(p)}$ and the model ${\displaystyle V}$ inherits its metric from the universe, the set ${\displaystyle C}$ has elements of arbitrarily small diameter. Finally, there is exactly one real which belongs to all members of the set ${\displaystyle C}$. The generic filter ${\displaystyle G}$ can be reconstructed from ${\displaystyle r}$ as ${\displaystyle G=\{p\in P\,|\,r\in {\bar {p}}\}}$.

If ${\displaystyle a}$ is a name for ${\displaystyle r}$, and ${\displaystyle B\in V}$ satisfies ${\displaystyle V\models }$"${\displaystyle B}$ is a Borel set of measure 1", then ${\displaystyle V[G]\models (p\Vdash _{\mathbb {P} }a\in {\check {B}})}$ holds for some ${\displaystyle p\in G}$. There is a name ${\displaystyle a}$ such that ${\displaystyle val(a,G)\in \bigcup _{p\in G}{\bar {p}}}$ holds for every generic filter ${\displaystyle G}$; for such a name, ${\displaystyle V[G]\models (p\Vdash _{\mathbb {P} }a\in {\check {B}})}$ holds for every condition ${\displaystyle p}$.

Every Borel set can, non-uniquely, be built up from intervals with rational endpoints by applying the operations of complement and countable union a countable number of times. The record of such a construction is called a Borel code. Given a Borel set ${\displaystyle B}$ in ${\displaystyle V}$, one recovers a Borel code, and then applies the same construction sequence in ${\displaystyle V[G]}$, obtaining a Borel set ${\displaystyle B^{*}}$. One can prove that the result is independent of the chosen construction of ${\displaystyle B}$, and that basic properties are preserved: for example, if ${\displaystyle B\subseteq C}$, then ${\displaystyle B^{*}\subseteq C^{*}}$, and if ${\displaystyle B}$ has measure zero, then ${\displaystyle B^{*}}$ has measure zero. The mapping ${\displaystyle B\mapsto B^{*}}$ is injective.
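A Borel code can be pictured as a syntax tree. The Python sketch below is a finite toy version, with countable unions truncated to finite ones and illustrative (non-standard) tag names; evaluating the same tree for points of another model is the idea behind the map ${\displaystyle B\mapsto B^{*}}$:

```python
from fractions import Fraction

# A Borel code as a finite syntax tree (countable unions truncated to
# finite ones for this illustration). Leaves are closed intervals with
# rational endpoints; internal nodes apply complement or union.
def member(code, x):
    """Decide whether the point x belongs to the set coded by `code`."""
    tag = code[0]
    if tag == 'int':                 # ('int', p, q): the interval [p, q]
        return code[1] <= x <= code[2]
    if tag == 'neg':                 # ('neg', c): complement within [0, 1]
        return not member(code[1], x)
    if tag == 'union':               # ('union', [c1, c2, ...])
        return any(member(c, x) for c in code[1])
    raise ValueError(f"unknown tag: {tag}")

# Code for [0, 1/4] ∪ [3/4, 1]; the same tree, evaluated on the points
# of a larger model, defines the reinterpreted set B*.
code = ('union', [('int', Fraction(0), Fraction(1, 4)),
                  ('int', Fraction(3, 4), Fraction(1))])
assert member(code, Fraction(1, 8))
assert member(('neg', code), Fraction(1, 2))
```

The code (the tree) lives in ${\displaystyle V}$, but it can be evaluated at points that exist only in ${\displaystyle V[G]}$; that is why ${\displaystyle B^{*}}$ can contain the new real ${\displaystyle r}$ even though ${\displaystyle B}$ does not.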

For any set ${\displaystyle B\subseteq [0,1]}$ such that ${\displaystyle B\in V}$ and ${\displaystyle V\models }$"${\displaystyle B}$ is a Borel set of measure 1", it holds that ${\displaystyle r\in B^{*}}$.

This means that ${\displaystyle r}$ is an "infinite random sequence of 0s and 1s" from the viewpoint of ${\displaystyle V}$, which means that it satisfies all statistical tests from the ground model ${\displaystyle V}$.

So given ${\displaystyle r}$, a random real, one can show that ${\displaystyle G=\{B~({\text{in}}~V)~|~r\in B^{*}~({\text{in}}~V[G])\}}$. Because of the mutual inter-definability between ${\displaystyle r}$ and ${\displaystyle G}$, one generally writes ${\displaystyle V[r]}$ for ${\displaystyle V[G]}$.

A different interpretation of reals in ${\displaystyle V[G]}$ was provided by Dana Scott. Rational numbers in ${\displaystyle V[G]}$ have names that correspond to countably many distinct rational values assigned to a maximal antichain of Borel sets, in other words, a certain rational-valued function on ${\displaystyle I=[0,1]}$. Real numbers in ${\displaystyle V[G]}$ then correspond to Dedekind cuts of such functions, that is, measurable functions.

## Boolean-valued models

Perhaps more clearly, the method can be explained in terms of Boolean-valued models. In these, any statement is assigned a truth value from some complete atomless Boolean algebra, rather than just a true/false value. Then an ultrafilter is picked in this Boolean algebra, which assigns values true/false to statements of our theory. The point is that the resulting theory has a model which contains this ultrafilter, which can be understood as a new model obtained by extending the old one with this ultrafilter. By picking a Boolean-valued model in an appropriate way, we can get a model that has the desired property. In it, only statements which must be true (are "forced" to be true) will be true, in a sense (since it has this extension/minimality property).
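The bookkeeping of Boolean truth values and of an ultrafilter collapsing them to true/false can be illustrated on a finite toy algebra. The sketch below (all names are ours) uses the powerset of a three-element set; this is only illustrative, since forcing requires a complete atomless Boolean algebra, which is necessarily infinite, whereas a finite algebra is atomic:

```python
# Toy Boolean algebra: the powerset of {0, 1, 2}. Only illustrative --
# forcing uses a complete *atomless* Boolean algebra, and any finite
# Boolean algebra, like this one, is atomic.
UNIVERSE = frozenset({0, 1, 2})

def complement(a):
    """Boolean negation: the set-theoretic complement."""
    return UNIVERSE - a

def meet(a, b):
    """Boolean conjunction: the intersection."""
    return a & b

# Assign truth values (elements of the algebra) to two statements.
value = {'phi': frozenset({0, 1}), 'psi': frozenset({1, 2})}

def in_ultrafilter(a, atom=1):
    """A principal ultrafilter: all values containing the atom 1."""
    return atom in a

# The ultrafilter collapses Boolean values to ordinary true/false,
# consistently with the connectives:
assert in_ultrafilter(meet(value['phi'], value['psi']))
assert not in_ultrafilter(complement(value['phi']))
```

For every value ${\displaystyle a}$, exactly one of ${\displaystyle a}$ and its complement lies in the ultrafilter, so every statement receives a definite truth value in the resulting model.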

## Meta-mathematical explanation

In forcing, we usually seek to show that some sentence is consistent with ${\displaystyle {\mathsf {ZFC}}}$ (or optionally some extension of ${\displaystyle {\mathsf {ZFC}}}$). One way to interpret the argument is to assume that ${\displaystyle {\mathsf {ZFC}}}$ is consistent and then prove that ${\displaystyle {\mathsf {ZFC}}}$ combined with the new sentence is also consistent.

Each “condition” is a finite piece of information – the idea is that only finite pieces are relevant for consistency, since, by the Compactness Theorem, a theory is satisfiable if and only if every finite subset of its axioms is satisfiable. Then we can pick an infinite set of consistent conditions to extend our model. Therefore, assuming the consistency of ${\displaystyle {\mathsf {ZFC}}}$, we prove the consistency of ${\displaystyle {\mathsf {ZFC}}}$ extended by this infinite set.
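The role of finite conditions can be made concrete with the Cohen-style poset ${\displaystyle \operatorname {Fin} (\omega ,2)}$ of finite partial functions from ${\displaystyle \omega }$ to ${\displaystyle \{0,1\}}$. In the hypothetical Python sketch below (function names are ours), meeting each dense set ${\displaystyle D_{n}=\{q\mid n\in \operatorname {dom} (q)\}}$ step by step assembles ever-larger compatible finite pieces, whose union is a total function, i.e. a new subset of ${\displaystyle \omega }$:

```python
# Conditions as finite partial functions from ω to {0, 1}, represented
# as dicts: a finite fragment of the poset Fin(ω, 2).
def compatible(p, q):
    """Two finite pieces of information are compatible iff they agree
    on their common domain."""
    return all(q[n] == v for n, v in p.items() if n in q)

def meet_dense(p, n, bit=0):
    """Extend p to a condition in D_n = {q : n is in dom(q)}."""
    return p if n in p else {**p, n: bit}

# Meet D_0, ..., D_9 in turn: each step is a stronger condition, and
# the limit of this process is a total function ω -> {0, 1}.
g = {}
for n in range(10):
    g = meet_dense(g, n)
assert compatible(g, {3: 0})
assert sorted(g) == list(range(10))
```

Each extension step is consistent with everything decided so far, which is the finite-piece-by-finite-piece picture the Compactness Theorem argument relies on.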

## Logical explanation

By Gödel's second incompleteness theorem, one cannot prove the consistency of any sufficiently strong formal theory, such as ${\displaystyle {\mathsf {ZFC}}}$, using only the axioms of the theory itself, unless the theory is inconsistent. Consequently, mathematicians do not attempt to prove the consistency of ${\displaystyle {\mathsf {ZFC}}}$ using only the axioms of ${\displaystyle {\mathsf {ZFC}}}$, or to prove that ${\displaystyle {\mathsf {ZFC}}+H}$ is consistent for any hypothesis ${\displaystyle H}$ using only ${\displaystyle {\mathsf {ZFC}}+H}$. For this reason, the aim of a consistency proof is to prove the consistency of ${\displaystyle {\mathsf {ZFC}}+H}$ relative to the consistency of ${\displaystyle {\mathsf {ZFC}}}$. Such problems are known as problems of relative consistency. In fact one proves

(*) ${\displaystyle {\mathsf {ZFC}}\vdash \operatorname {Con} ({\mathsf {ZFC}})\rightarrow \operatorname {Con} ({\mathsf {ZFC}}+H).}$

We will now give the general schema of relative consistency proofs. As any proof is finite, it uses only a finite number of axioms:

${\displaystyle {\mathsf {ZFC}}+\lnot \operatorname {Con} ({\mathsf {ZFC}}+H)\vdash (\exists T)(\operatorname {Fin} (T)\land T\subset {\mathsf {ZFC}}\land (T\vdash \lnot H)).}$

For any given proof, ${\displaystyle {\mathsf {ZFC}}}$ can verify the validity of this proof. This is provable by induction on the length of the proof.

${\displaystyle {\mathsf {ZFC}}\vdash (\forall T)((T\vdash \lnot H)\rightarrow ({\mathsf {ZFC}}\vdash (T\vdash \lnot H))).}$

Now, we obtain

${\displaystyle {\mathsf {ZFC}}+\lnot \operatorname {Con} ({\mathsf {ZFC}}+H)\vdash (\exists T)(\operatorname {Fin} (T)\land T\subset {\mathsf {ZFC}}\land ({\mathsf {ZFC}}\vdash (T\vdash \lnot H))).}$

If we prove the following,

(**) ${\displaystyle {\mathsf {ZFC}}\vdash (\forall T)(\operatorname {Fin} (T)\land T\subset {\mathsf {ZFC}}\rightarrow ({\mathsf {ZFC}}\vdash \operatorname {Con} (T+H)))}$,

we can conclude that

${\displaystyle {\mathsf {ZFC}}+\lnot \operatorname {Con} ({\mathsf {ZFC}}+H)\vdash (\exists T)(\operatorname {Fin} (T)\land T\subset {\mathsf {ZFC}}\land ({\mathsf {ZFC}}\vdash (T\vdash \lnot H))\land ({\mathsf {ZFC}}\vdash \operatorname {Con} (T+H)))}$,

which is equivalent to

${\displaystyle {\mathsf {ZFC}}+\lnot \operatorname {Con} ({\mathsf {ZFC}}+H)\vdash \lnot \operatorname {Con} ({\mathsf {ZFC}})}$,

which gives (*). The core of the relative consistency proof is proving (**): one has to construct, by the means of ${\displaystyle {\mathsf {ZFC}}}$ itself, a ${\displaystyle {\mathsf {ZFC}}}$ proof of ${\displaystyle \operatorname {Con} (T+H)}$ for any given finite subset ${\displaystyle T}$ of the ${\displaystyle {\mathsf {ZFC}}}$ axioms. (There is, of course, no universal proof of ${\displaystyle \operatorname {Con} (T+H)}$.)

In ${\displaystyle {\mathsf {ZFC}}}$, it is provable that for any condition ${\displaystyle p}$, the set of formulas (evaluated by names) forced by ${\displaystyle p}$ is deductively closed. Furthermore, for any ${\displaystyle {\mathsf {ZFC}}}$ axiom, ${\displaystyle {\mathsf {ZFC}}}$ proves that this axiom is forced by ${\displaystyle \mathbf {1} }$. Then it suffices to prove that there is at least one condition that forces ${\displaystyle H}$.

In the case of Boolean-valued forcing, the procedure is similar – one has to prove that the Boolean value of ${\displaystyle H}$ is not ${\displaystyle \mathbf {0} }$.

Another approach uses the Reflection Theorem. For any given finite set of ${\displaystyle {\mathsf {ZFC}}}$ axioms, there is a ${\displaystyle {\mathsf {ZFC}}}$ proof that this set of axioms has a countable transitive model. For any given finite set ${\displaystyle T}$ of ${\displaystyle {\mathsf {ZFC}}}$ axioms, there is a finite set ${\displaystyle T'}$ of ${\displaystyle {\mathsf {ZFC}}}$ axioms such that ${\displaystyle {\mathsf {ZFC}}}$ proves that if a countable transitive model ${\displaystyle M}$ satisfies ${\displaystyle T'}$, then ${\displaystyle M[G]}$ satisfies ${\displaystyle T}$. One has to prove that there is a finite set ${\displaystyle T''}$ of ${\displaystyle {\mathsf {ZFC}}}$ axioms such that if a countable transitive model ${\displaystyle M}$ satisfies ${\displaystyle T''}$, then ${\displaystyle M[G]}$ satisfies the hypothesis ${\displaystyle H}$. Then, for any given finite set ${\displaystyle T}$ of ${\displaystyle {\mathsf {ZFC}}}$ axioms, ${\displaystyle {\mathsf {ZFC}}}$ proves ${\displaystyle \operatorname {Con} (T+H)}$.

Sometimes in (**), a stronger theory ${\displaystyle S}$ than ${\displaystyle {\mathsf {ZFC}}}$ is used for proving ${\displaystyle \operatorname {Con} (T+H)}$. Then we have proof of the consistency of ${\displaystyle {\mathsf {ZFC}}+H}$ relative to the consistency of ${\displaystyle S}$. Note that ${\displaystyle {\mathsf {ZFC}}\vdash \operatorname {Con} ({\mathsf {ZFC}})\leftrightarrow \operatorname {Con} ({\mathsf {ZFL}})}$, where ${\displaystyle {\mathsf {ZFL}}}$ is ${\displaystyle {\mathsf {ZF}}+V=L}$ (the axiom of constructibility).