Pairwise independence

In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent.[1] Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.
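The last of these claims follows directly from the definition of covariance: independence of two random variables X and Y with finite variance gives $\operatorname{E}[XY]=\operatorname{E}[X]\operatorname{E}[Y]$, and therefore

$\operatorname{Cov}(X,Y)=\operatorname{E}[XY]-\operatorname{E}[X]\,\operatorname{E}[Y]=0.$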

Example

Suppose X and Y are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable Z be equal to 1 if exactly one of those coin tosses resulted in "heads" (that is, Z is the sum modulo 2 of X and Y), and 0 otherwise. Then jointly the triple (X, Y, Z) has the following probability distribution:

$(X,Y,Z)=\left\{\begin{matrix} (0,0,0) & \text{with probability}\ 1/4, \\ (0,1,1) & \text{with probability}\ 1/4, \\ (1,0,1) & \text{with probability}\ 1/4, \\ (1,1,0) & \text{with probability}\ 1/4. \end{matrix}\right.$

It is easy to verify that

• X and Y are independent, and
• X and Z are independent, and
• Y and Z are independent, however
• jointly X, Y, and Z are not independent, since each of them is completely determined by the other two (each of X, Y, Z is the sum, modulo 2, of the other two). That is as far from independence as random variables can get. However, X, Y, and Z are pairwise independent: in each of the pairs (X, Y), (X, Z), and (Y, Z), the two random variables are independent, as the sketch after this list confirms.
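As a concrete check, the following short Python sketch (the helper name prob is ours, introduced for illustration) enumerates the joint distribution above and verifies both the pairwise independence of every pair and the failure of mutual independence:

```python
from itertools import product

# Joint distribution from the table above: each listed outcome has probability 1/4.
joint = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}

def prob(**fixed):
    """P(the named variables take the given values), where the keyword
    names x, y, z refer to coordinates 0, 1, 2 of an outcome."""
    pos = {"x": 0, "y": 1, "z": 2}
    return sum(p for outcome, p in joint.items()
               if all(outcome[pos[name]] == v for name, v in fixed.items()))

# Each pair factorizes: P(A=a, B=b) = P(A=a) P(B=b) = 1/4 for all a, b.
for a, b in product([0, 1], repeat=2):
    assert prob(x=a, y=b) == prob(x=a) * prob(y=b)
    assert prob(x=a, z=b) == prob(x=a) * prob(z=b)
    assert prob(y=a, z=b) == prob(y=a) * prob(z=b)

# But the triple does not factorize: P(X=0, Y=0, Z=1) = 0, not 1/8.
assert prob(x=0, y=0, z=1) == 0.0
assert prob(x=0) * prob(y=0) * prob(z=1) == 0.125
```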

Generalization

More generally, we can talk about k-wise independence, for any k ≥ 2. The idea is similar: a set of random variables is k-wise independent if every subset of size k of those variables is independent. k-wise independence has been used in theoretical computer science, for example to prove a theorem about the problem MAXEkSAT.
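As an illustration of the definition, the following brute-force Python sketch (the function name is_k_wise_independent is illustrative, not from any library) tests k-wise independence of a small finite joint distribution by comparing every size-k marginal with the product of the corresponding single-variable marginals. Applied to the distribution from the example above, it returns True for k = 2 and False for k = 3:

```python
from itertools import combinations, product

def is_k_wise_independent(joint, n, k, values=(0, 1), tol=1e-12):
    """Test whether the n variables of a finite joint distribution
    (a dict mapping n-tuples of values to probabilities) are k-wise
    independent: every size-k marginal must equal the product of the
    corresponding single-variable marginals."""
    def marginal(indices):
        dist = {}
        for outcome, p in joint.items():
            key = tuple(outcome[i] for i in indices)
            dist[key] = dist.get(key, 0.0) + p
        return dist

    singles = [marginal((i,)) for i in range(n)]
    for subset in combinations(range(n), k):
        sub = marginal(subset)
        for vals in product(values, repeat=k):
            expected = 1.0
            for i, v in zip(subset, vals):
                expected *= singles[i].get((v,), 0.0)
            if abs(sub.get(vals, 0.0) - expected) > tol:
                return False
    return True

# The (X, Y, Z) distribution from the example above:
joint = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}
print(is_k_wise_independent(joint, 3, 2))  # True: pairwise independent
print(is_k_wise_independent(joint, 3, 3))  # False: not mutually independent
```

Checking only the size-k marginals suffices here, because a factorized size-k joint distribution forces all of its smaller marginals to factorize as well.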