# Pi system

In mathematics, a π-system (or pi-system) on a set Ω is a collection P of certain subsets of Ω, such that

• P is non-empty.
• A ∩ B ∈ P whenever A and B are in P.

That is, P is a non-empty family of subsets of Ω that is closed under finite intersections. The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a λ-system. π-systems are also useful for checking the independence of random variables.
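
For a finite family of sets, these two properties can be checked directly. The following Python sketch (with a hypothetical helper `is_pi_system`) tests non-emptiness and closure under pairwise intersections; closure under finite intersections then follows by induction:

```python
from itertools import combinations

def is_pi_system(family: set) -> bool:
    """A finite family of frozensets is a π-system iff it is non-empty
    and the intersection of any two members is again a member."""
    if not family:
        return False
    return all(a & b in family for a, b in combinations(family, 2))

# Finite analogue of the nested intervals (-inf, a]: the sets {1, ..., n}.
nested = {frozenset(range(1, n + 1)) for n in range(1, 5)}
print(is_pi_system(nested))  # True: the intersection of nested sets is the smaller one
```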

This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets $\sigma(E_1, E_2, \ldots)$. So instead we may examine the union of all σ-algebras generated by finitely many sets $\bigcup_n \sigma(E_1, \ldots, E_n)$. This forms a π-system that generates the desired σ-algebra. Another example is the collection of all interval subsets of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line.

## Examples

• On the real line $\mathbb{R}$, the intervals $(-\infty, a]$ form a π-system. Similarly, the intervals $(a, b]$ form a π-system if the empty set is also included.
• The topology (collection of open subsets) of any topological space is a π-system.
• For any collection Σ of subsets of Ω, there exists a unique smallest π-system $\mathcal I_{\Sigma}$ on Ω containing every element of Σ, called the π-system generated by Σ (see the sketch after this list).
• For any measurable function $f \colon \Omega \rightarrow \mathbb{R}$, the set $\mathcal{I}_f = \left \{ f^{-1}\left(\left( - \infty, x \right]\right) \colon x \in \mathbb{R} \right \}$ defines a π-system, and is called the π-system generated by $f$. (Alternatively, $\left \{ f^{-1}\left(\left(a, b \right]\right) \colon a, b \in \mathbb{R}, a < b \right \} \cup \{\varnothing\}$ defines a π-system generated by $f$.)
• If P1 and P2 are π-systems for Ω1 and Ω2, respectively, then $\{A_1\times A_2:A_1\in P_1, A_2\in P_2\}$ is a π-system for the product space Ω1×Ω2.
• Any σ-algebra is a π-system.
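
For a finite collection Σ, the generated π-system of the third example can be computed by repeatedly adding pairwise intersections until a fixed point is reached. A minimal sketch (the helper `generate_pi_system` is hypothetical):

```python
def generate_pi_system(sets):
    """Close a finite collection of frozensets under pairwise intersection;
    the fixed point is the smallest π-system containing the input."""
    family = set(sets)
    while True:
        new = {a & b for a in family for b in family} - family
        if not new:
            return family
        family |= new

sigma = [frozenset({1, 2}), frozenset({2, 3})]
print(generate_pi_system(sigma))
# {frozenset({1, 2}), frozenset({2, 3}), frozenset({2})} (order may vary)
```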

## Relationship to λ-Systems

A λ-system on Ω is a set D of subsets of Ω, satisfying

• $\Omega\in D$,
• if $A\in D$ then $A^c\in D$,
• if $A_1, A_2, A_3, \dots$ is a sequence of pairwise disjoint subsets in $D$ then $\bigcup_{n=1}^{\infty}A_n\in D$.

Every σ-algebra is both a π-system and a λ-system, but the converse implications fail: a π-system need not be a λ-system, and in particular need not be a σ-algebra. However, a useful characterization is that any family of sets which is both a λ-system and a π-system is a σ-algebra. This is used as a step in proving the π-λ theorem.
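
When Ω is finite, both axiom sets can be verified by brute force, since countable disjoint unions reduce to finite ones. A sketch of such a check (both helpers are hypothetical):

```python
from itertools import chain, combinations

def is_pi_system(family):
    """Non-empty and closed under pairwise intersection."""
    return bool(family) and all(a & b in family for a in family for b in family)

def is_lambda_system(omega, family):
    """Check the three λ-system axioms on a finite family of frozensets."""
    if omega not in family:
        return False
    if any(omega - a not in family for a in family):
        return False
    # every union of pairwise disjoint members must stay in the family
    for r in range(2, len(family) + 1):
        for combo in combinations(family, r):
            if all(a.isdisjoint(b) for a, b in combinations(combo, 2)):
                if frozenset(chain.from_iterable(combo)) not in family:
                    return False
    return True

omega = frozenset({1, 2, 3, 4})
family = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}
print(is_lambda_system(omega, family) and is_pi_system(family))  # True: a σ-algebra
```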

### The π-λ Theorem

Let $D$ be a λ-system, and let $\mathcal{I} \subseteq D$ be a π-system contained in $D$. The π-λ theorem[1] states that the σ-algebra $\sigma(\mathcal{I})$ generated by $\mathcal{I}$ is contained in $D$: $\sigma(\mathcal{I}) \subseteq D$.
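
On a finite set the theorem can be verified by brute force: close $\mathcal{I}$ under the λ-system operations and check that the generated σ-algebra is contained in the result. An illustrative sketch (the helpers `close_sigma` and `close_lambda` are hypothetical):

```python
def close_lambda(omega, seed):
    """Smallest λ-system on a finite Ω containing `seed`: repeatedly add Ω,
    complements, and unions of disjoint pairs until nothing new appears."""
    family = set(seed) | {omega}
    while True:
        new = {omega - a for a in family} - family
        new |= {a | b for a in family for b in family if a.isdisjoint(b)} - family
        if not new:
            return family
        family |= new

def close_sigma(omega, seed):
    """Smallest σ-algebra on a finite Ω containing `seed` (complements and unions)."""
    family = set(seed) | {omega}
    while True:
        new = ({omega - a for a in family}
               | {a | b for a in family for b in family}) - family
        if not new:
            return family
        family |= new

omega = frozenset(range(4))
I = {frozenset({0}), frozenset({0, 1})}  # nested, hence closed under intersection
# σ(I) lands inside the λ-system generated by I, as the theorem predicts:
print(close_sigma(omega, I) <= close_lambda(omega, I))  # True
```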

The π-λ theorem can be used to prove many elementary measure theoretic results. For instance, it is used in proving the uniqueness claim of the Carathéodory extension theorem for σ-finite measures.[2]

The π-λ theorem is closely related to the monotone class theorem, which provides a similar relationship between monotone classes and algebras, and can be used to derive many of the same results. Since π-systems are simpler classes than algebras, it can be easier to identify the sets that belong to them; on the other hand, checking that the property under consideration determines a λ-system is often relatively easy. Despite the difference between the two theorems, the π-λ theorem is sometimes referred to as the monotone class theorem.[1]

#### Example

Let $\mu_1, \mu_2 \colon F \rightarrow \mathbb{R}$ be two measures on the σ-algebra $F$, and suppose that $F = \sigma(I)$ is generated by a π-system $I$. If

1. $\mu_1(A) = \mu_2(A)$ for all $A \in I$, and
2. $\mu_1(\Omega) = \mu_2(\Omega) < \infty$,

then $\mu_1 = \mu_2$. This is the uniqueness statement of the Carathéodory extension theorem for finite measures. If this result does not seem very remarkable, consider the fact that it is usually very difficult or even impossible to fully describe every set in the σ-algebra, so the problem of equating measures would be completely hopeless without such a tool.

**Idea of proof.**[2] Define the collection of sets

$D = \left\{ A \in \sigma(I) \colon \mu_1(A) = \mu_2(A) \right\}.$

By the first assumption, $\mu_1$ and $\mu_2$ agree on $I$, and thus $I \subseteq D$. By the second assumption, $\Omega \in D$, and it can further be shown that $D$ is a λ-system; for instance, closure under complements follows from the finiteness of the measures, since $\mu_1(A^c) = \mu_1(\Omega) - \mu_1(A) = \mu_2(\Omega) - \mu_2(A) = \mu_2(A^c)$. It follows from the π-λ theorem that $\sigma(I) \subseteq D \subseteq \sigma(I)$, and so $D = \sigma(I)$. That is to say, the measures agree on $\sigma(I)$.
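
The determination can be made concrete on a finite space (an illustrative sketch, not part of the cited proof): when the π-system consists of nested sets generating the full power set, its values pin down every point mass by successive differences.

```python
# Ω = {1, 2, 3, 4}; this nested family is a π-system generating the power set.
pi_system = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3}),
             frozenset({1, 2, 3, 4})]

def point_masses(values_on_pi_system):
    """Recover all point masses (hence the whole measure on σ(I)) from the
    measure's values on the nested π-system via successive differences."""
    masses, previous = {}, 0.0
    for point, value in zip([1, 2, 3, 4], values_on_pi_system):
        masses[point] = value - previous
        previous = value
    return masses

# Two measures agreeing on the π-system yield identical point masses, so
# they agree on every set in the generated σ-algebra:
print(point_masses([0.125, 0.5, 0.75, 1.0]))  # {1: 0.125, 2: 0.375, 3: 0.25, 4: 0.25}
```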

## π-Systems in Probability

π-systems are more commonly used in the study of probability theory than in the general field of measure theory. This is primarily due to probabilistic notions such as independence, though it may also be a consequence of the fact that the π-λ theorem was proven by the probabilist Eugene Dynkin. Standard measure theory texts typically prove the same results via monotone classes, rather than π-systems.

### Equality in Distribution

The π-λ theorem motivates the common definition of the probability distribution of a random variable $X \colon (\Omega, \mathcal F, \mathbb P) \rightarrow \mathbb R$ in terms of its cumulative distribution function. Recall that the cumulative distribution function of a random variable is defined as

$F_X(a) = \mathbb{P}\left[ X \leq a \right], \qquad a \in \mathbb{R}$,

whereas the seemingly more general law of the variable is the probability measure

$\mathcal{L}_X(B) = \mathbb{P}\left[ X^{-1}(B) \right], \qquad B \in \mathcal{B}(\mathbb R)$,

where $\mathcal{B}(\mathbb R)$ is the Borel σ-algebra. We say that the random variables $X \colon (\Omega, \mathcal F, \mathbb P) \rightarrow \mathbb R$ and $Y \colon (\tilde\Omega,\tilde{ \mathcal F}, \tilde{\mathbb P}) \rightarrow \mathbb R$ (on two possibly different probability spaces) are equal in distribution (or law), written $X \stackrel{\mathcal D}{=} Y$, if they have the same cumulative distribution functions, $F_X = F_Y$. The motivation for the definition stems from the observation that if $F_X = F_Y$, then $\mathcal{L}_X$ and $\mathcal{L}_Y$ agree on the π-system $\left\{(-\infty, a] \colon a \in \mathbb R \right\}$, which generates $\mathcal{B}(\mathbb R)$, and so by the example above $\mathcal{L}_X = \mathcal{L}_Y$.
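
This can be observed numerically (an illustrative simulation, assuming NumPy): $X \sim \mathcal N(0,1)$ and $Y = -X$ differ pointwise, yet their empirical CDFs agree up to sampling error on the generating π-system, so they are equal in distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = -x  # equal in distribution to x by symmetry, though pointwise different

# Agreement on the π-system {(-inf, a]} is what equality in distribution means:
grid = np.linspace(-3.0, 3.0, 13)
F_x = np.array([(x <= a).mean() for a in grid])
F_y = np.array([(y <= a).mean() for a in grid])
print(np.abs(F_x - F_y).max())  # small, on the order of 1/sqrt(n)
```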

A similar result holds for the joint distribution of a random vector. For example, suppose X and Y are two random variables defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with respectively generated π-systems $\mathcal{I}_X$ and $\mathcal{I}_Y$. The joint cumulative distribution function of (X,Y) is

$F_{X,Y}(a,b) = \mathbb{P}\left[ X \leq a,Y\leq b \right] =\mathbb{P}\left[ X^{-1}((-\infty,a]) \cap Y^{-1}((-\infty,b]) \right], \qquad a,b \in \mathbb{R}$.

Note that $A = X^{-1}((-\infty,a]) \in \mathcal{I}_X$ and $B = Y^{-1}((-\infty,b]) \in \mathcal{I}_Y$. Since

$\mathcal{I}_{X,Y} = \{A\cap B:A \in \mathcal{I}_X, \, B \in \mathcal{I}_Y\}$

is a π-system generated by the random pair $(X,Y)$, the π-λ theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of $(X,Y)$. In other words, two random vectors $(X,Y)$ and $(W,Z)$ have the same distribution if and only if they have the same joint cumulative distribution function.

In the theory of stochastic processes, two processes $(X_t)_{t \in T}$ and $(Y_t)_{t \in T}$ are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all $t_1, \ldots, t_n \in T$ and $n \in \mathbb N$,

$(X_{t_1},\ldots, X_{t_n}) \stackrel{\mathcal{D}}{=} (Y_{t_1},\ldots, Y_{t_n})$.

The proof of this is another application of the π-λ theorem.[3]

### Independent Random Variables

The theory of π-systems plays an important role in the probabilistic notion of independence. If $X$ and $Y$ are two random variables defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$, then the random variables are independent if and only if their π-systems $\mathcal{I}_X, \mathcal{I}_Y$ satisfy

$\mathbb{P}\left[ A \cap B \right] = \mathbb{P}\left[ A \right] \mathbb{P}\left[ B \right], \qquad \forall A \in \mathcal{I}_X, \, B \in \mathcal{I}_Y,$

which is to say that $\mathcal{I}_X, \mathcal{I}_Y$ are independent. This is in fact a special case of the use of π-systems for determining the distribution of $(X,Y)$.
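
A Monte Carlo sketch of this criterion (assuming NumPy; `cdf_factorizes` is a hypothetical helper) checks the factorization on a finite grid drawn from the two generating π-systems:

```python
import numpy as np

def cdf_factorizes(x, y, grid, tol=0.01):
    """Empirically test P[X <= a, Y <= b] ≈ P[X <= a] * P[Y <= b] over a grid,
    i.e. that the generated π-systems look independent."""
    for a in grid:
        for b in grid:
            joint = ((x <= a) & (y <= b)).mean()
            if abs(joint - (x <= a).mean() * (y <= b).mean()) > tol:
                return False
    return True

rng = np.random.default_rng(1)
x, y = rng.standard_normal(200_000), rng.standard_normal(200_000)
grid = np.linspace(-2.0, 2.0, 9)
print(cdf_factorizes(x, y, grid))            # True: independent samples
print(cdf_factorizes(x, x + 0.5 * y, grid))  # False: the pair is correlated
```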

#### Example

Let $Z = (Z_1, Z_2)$, where $Z_1, Z_2 \sim \mathcal{N}(0,1)$ are iid standard normal random variables. Define the radius and argument (polar angle) variables

$R = \sqrt{Z_1^2 + Z_2^2}, \qquad \Theta = \tan^{-1}(Z_2/Z_1),$

where $\Theta$ is understood as the polar angle of the point $(Z_1, Z_2)$, taken in $[0, 2\pi)$, so that the change to polar coordinates below applies.

Then $R$ and $\Theta$ are independent random variables.

To prove this, it is sufficient to show that the π-systems $\mathcal{I}_R, \mathcal{I}_\Theta$ are independent; that is,

$\mathbb P [ R \leq \rho, \Theta \leq \theta] = \mathbb P[R \leq \rho] \mathbb P[\Theta \leq \theta] \quad \forall \rho \in [0,\infty), \, \theta \in [0,2\pi].$

Confirming that this is the case is an exercise in changing variables. Fix $\rho \in [0,\infty)$ and $\theta \in [0,2\pi]$; then the probability can be expressed as an integral of the probability density function of $Z$:

\begin{align}
\mathbb P[R \leq \rho, \Theta \leq \theta]
&= \int_{R \leq \rho,\, \Theta \leq \theta} \frac{1}{2\pi} \exp\left(-\tfrac{1}{2}\left(z_1^2 + z_2^2\right)\right) dz_1 \, dz_2 \\
&= \int_0^\theta \int_0^\rho \frac{1}{2\pi} e^{-\frac{r^2}{2}} r \, dr \, d\tilde\theta \\
&= \left( \int_0^\theta \frac{1}{2\pi} \, d\tilde\theta \right) \left( \int_0^\rho e^{-\frac{r^2}{2}} r \, dr \right) \\
&= \mathbb P[\Theta \leq \theta] \, \mathbb P[R \leq \rho].
\end{align}
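
The computation can be cross-checked by simulation (an illustrative sketch, assuming NumPy), comparing the empirical joint probability with the closed forms obtained above, $\mathbb P[\Theta \leq \theta] = \theta / 2\pi$ and $\mathbb P[R \leq \rho] = 1 - e^{-\rho^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
z1, z2 = rng.standard_normal(500_000), rng.standard_normal(500_000)
r = np.hypot(z1, z2)                       # radius R
theta = np.arctan2(z2, z1) % (2 * np.pi)   # polar angle folded into [0, 2π)

rho, th = 1.0, np.pi / 3
empirical = ((r <= rho) & (theta <= th)).mean()
closed_form = (th / (2 * np.pi)) * (1 - np.exp(-rho**2 / 2))
print(empirical, closed_form)  # both ≈ (1/6)(1 - e^(-1/2)) ≈ 0.0656
```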