# Conditional independence


In probability theory, two random events ${\displaystyle A}$ and ${\displaystyle B}$ are conditionally independent given a third event ${\displaystyle C}$ precisely if the occurrence of ${\displaystyle A}$ and the occurrence of ${\displaystyle B}$ are independent events in their conditional probability distribution given ${\displaystyle C}$. In other words, ${\displaystyle A}$ and ${\displaystyle B}$ are conditionally independent given ${\displaystyle C}$ if and only if, given knowledge that ${\displaystyle C}$ occurs, knowledge of whether ${\displaystyle A}$ occurs provides no information on the likelihood of ${\displaystyle B}$ occurring, and knowledge of whether ${\displaystyle B}$ occurs provides no information on the likelihood of ${\displaystyle A}$ occurring.

The concept of conditional independence can be extended from random events to random variables and random vectors.

## Conditional independence of events

### Definition

In the standard notation of probability theory, ${\displaystyle A}$ and ${\displaystyle B}$ are conditionally independent given ${\displaystyle C}$ if and only if ${\displaystyle \Pr(A\cap B\mid C)=\Pr(A\mid C)\Pr(B\mid C)}$. Conditional independence of ${\displaystyle A}$ and ${\displaystyle B}$ given ${\displaystyle C}$ is denoted by ${\displaystyle (A\perp \!\!\!\perp B)\mid C}$. Formally:

${\displaystyle (A\perp \!\!\!\perp B)\mid C\quad \iff \quad \Pr(A\cap B\mid C)=\Pr(A\mid C)\Pr(B\mid C)}$

(Eq.1)

or equivalently,

${\displaystyle (A\perp \!\!\!\perp B)\mid C\quad \iff \quad \Pr(A\mid B\cap C)=\Pr(A\mid C)\quad {\text{or}}\quad \Pr(B\mid C)=0.}$

Here the second alternative covers the case ${\displaystyle \Pr(B\cap C)=0}$, in which ${\displaystyle \Pr(A\mid B\cap C)}$ is undefined but conditional independence holds trivially.
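
As a quick numerical check of Eq.1, the following sketch evaluates both sides on an invented sample space of twelve equally likely outcomes (the events `A`, `B`, `C` are chosen purely for illustration), using exact rational arithmetic:

```python
from fractions import Fraction

# An invented sample space of 12 equally likely outcomes.
omega = set(range(12))
C = {o for o in omega if o < 6}        # {0, ..., 5}
A = {o for o in omega if o % 2 == 0}   # even outcomes
B = {o for o in omega if o % 3 == 0}   # multiples of 3

def pr(event, given):
    """Conditional probability Pr(event | given) under the uniform law."""
    return Fraction(len(event & given), len(given))

lhs = pr(A & B, C)           # Pr(A ∩ B | C)
rhs = pr(A, C) * pr(B, C)    # Pr(A | C) · Pr(B | C)
assert lhs == rhs == Fraction(1, 6)   # A and B are conditionally independent given C
```

Because the law is uniform, each conditional probability is just a ratio of counts, which keeps the check exact.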

### Examples

A discussion on StackExchange provides a couple of useful examples.[1]

#### Coloured boxes

Each cell represents a possible outcome. The events ${\displaystyle R}$, ${\displaystyle B}$ and ${\displaystyle Y}$ are represented by the areas shaded red, blue and yellow respectively. The overlap between the events ${\displaystyle R}$ and ${\displaystyle B}$ is shaded purple.

The probability of each event is the ratio of its shaded area to the total area. In both examples ${\displaystyle R}$ and ${\displaystyle B}$ are conditionally independent given ${\displaystyle Y}$ because:

${\displaystyle \Pr(R\cap B\mid Y)=\Pr(R\mid Y)\Pr(B\mid Y)\,}$[2]

but not conditionally independent given ${\displaystyle \left[{\text{not }}Y\right]}$ because:

${\displaystyle \Pr(R\cap B\mid {\text{not }}Y)\not =\Pr(R\mid {\text{not }}Y)\Pr(B\mid {\text{not }}Y).\,}$
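
Since the figure itself is not reproduced here, a small hypothetical layout of eight equally likely cells (the cell assignments below are invented, not the ones from the original figure) exhibits the same phenomenon:

```python
from fractions import Fraction

# Hypothetical 8-cell layout standing in for the coloured-box figure.
cells = set(range(8))
Y = {0, 1, 2, 3}         # yellow region
R = {0, 1, 4, 5}         # red region
B = {1, 2, 4, 5}         # blue region

def pr(event, given):
    return Fraction(len(event & given), len(given))

# Conditionally independent given Y: 1/4 == 1/2 · 1/2
assert pr(R & B, Y) == pr(R, Y) * pr(B, Y)

# ...but not conditionally independent given "not Y": 1/2 != 1/2 · 1/2
notY = cells - Y
assert pr(R & B, notY) != pr(R, notY) * pr(B, notY)
```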

#### Weather and delays

Let the two events be that persons A and B each get home in time for dinner, and let the third event be that a snow storm has hit the city. Conditional on the storm, both A and B have a lower probability of getting home in time, but the two events can still be independent given the storm: the knowledge that A is late does not tell you whether B will be late. (They may live in different neighborhoods, travel different distances, and use different modes of transportation.) However, if you also know that they live in the same neighborhood, use the same transportation, and work at the same place, then the two events are NOT conditionally independent.

#### Dice rolling

Conditional independence depends on the nature of the third event. If you roll two dice, one may assume that they behave independently of each other: looking at the result of one die will not tell you about the result of the other. (That is, the two dice are independent.) If, however, the first die's result is a 3, and someone tells you about a third event - that the sum of the two results is even - then this extra piece of information restricts the options for the second result to an odd number. In other words, two events can be independent but NOT conditionally independent given a third event.
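
The dice example can be verified by exhaustive enumeration over the 36 equally likely rolls; here the events `A` ("first die shows 3") and `B` ("second die shows 4") are one concrete choice for illustration:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely rolls of two dice.
outcomes = list(product(range(1, 7), repeat=2))

def pr(pred, cond=lambda o: True):
    sel = [o for o in outcomes if cond(o)]
    return Fraction(sum(1 for o in sel if pred(o)), len(sel))

A = lambda o: o[0] == 3                 # first die shows 3
B = lambda o: o[1] == 4                 # second die shows 4
C = lambda o: (o[0] + o[1]) % 2 == 0    # sum of the dice is even

# The dice are independent...
assert pr(lambda o: A(o) and B(o)) == pr(A) * pr(B)

# ...but not conditionally independent given an even sum:
# 3 + 4 is odd, so Pr(A ∩ B | C) = 0 while Pr(A | C) · Pr(B | C) > 0.
assert pr(lambda o: A(o) and B(o), C) == 0
assert pr(A, C) * pr(B, C) == Fraction(1, 36)
```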

#### Height and vocabulary of kids

Height and vocabulary are not independent, since very young children tend to be both short and to have small vocabularies; but they are conditionally independent given age: among children of the same age, height provides no further information about vocabulary.

## Conditional independence of random variables

Two random variables ${\displaystyle X}$ and ${\displaystyle Y}$ are conditionally independent given a third random variable ${\displaystyle Z}$ if and only if they are independent in their conditional probability distribution given ${\displaystyle Z}$. That is, ${\displaystyle X}$ and ${\displaystyle Y}$ are conditionally independent given ${\displaystyle Z}$ if and only if, given any value of ${\displaystyle Z}$, the probability distribution of ${\displaystyle X}$ is the same for all values of ${\displaystyle Y}$ and the probability distribution of ${\displaystyle Y}$ is the same for all values of ${\displaystyle X}$. Formally:

${\displaystyle (X\perp \!\!\!\perp Y)\mid Z\quad \iff \quad F_{X,Y\,\mid \,Z\,=\,z}(x,y)=F_{X\,\mid \,Z\,=\,z}(x)\cdot F_{Y\,\mid \,Z\,=\,z}(y)\quad {\text{for all }}x,y,z}$

(Eq.2)

where ${\displaystyle F_{X,Y\,\mid \,Z\,=\,z}(x,y)=\Pr(X\leq x,Y\leq y\mid Z=z)}$ is the conditional cumulative distribution function of ${\displaystyle X}$ and ${\displaystyle Y}$ given ${\displaystyle Z}$.

Two events ${\displaystyle R}$ and ${\displaystyle B}$ are conditionally independent given a σ-algebra ${\displaystyle \Sigma }$ if

${\displaystyle \Pr(R\cap B\mid \Sigma )=\Pr(R\mid \Sigma )\Pr(B\mid \Sigma ){\text{ a.s.}}}$

where ${\displaystyle \Pr(A\mid \Sigma )}$ denotes the conditional expectation of the indicator function of the event ${\displaystyle A}$, ${\displaystyle \chi _{A}}$, given the sigma algebra ${\displaystyle \Sigma }$. That is,

${\displaystyle \Pr(A\mid \Sigma ):=\operatorname {E} [\chi _{A}\mid \Sigma ].}$

Two random variables ${\displaystyle X}$ and ${\displaystyle Y}$ are conditionally independent given a σ-algebra ${\displaystyle \Sigma }$ if the above equation holds for all ${\displaystyle R}$ in ${\displaystyle \sigma (X)}$ and ${\displaystyle B}$ in ${\displaystyle \sigma (Y)}$.

Two random variables ${\displaystyle X}$ and ${\displaystyle Y}$ are conditionally independent given a random variable ${\displaystyle W}$ if they are independent given σ(W): the σ-algebra generated by ${\displaystyle W}$. This is commonly written:

${\displaystyle X\perp \!\!\!\perp Y\mid W}$ or
${\displaystyle X\perp Y\mid W}$

This is read "${\displaystyle X}$ is independent of ${\displaystyle Y}$, given ${\displaystyle W}$"; the conditioning applies to the whole statement: "(${\displaystyle X}$ is independent of ${\displaystyle Y}$) given ${\displaystyle W}$".

${\displaystyle (X\perp \!\!\!\perp Y)\mid W}$

If ${\displaystyle W}$ assumes a countable set of values, this is equivalent to the conditional independence of X and Y for the events of the form ${\displaystyle [W=w]}$. Conditional independence of more than two events, or of more than two random variables, is defined analogously.

The following two examples show that ${\displaystyle X\perp \!\!\!\perp Y}$ neither implies nor is implied by ${\displaystyle (X\perp \!\!\!\perp Y)\mid W}$.

First, suppose ${\displaystyle W}$ is 0 with probability 0.5 and 1 otherwise. When ${\displaystyle W=0}$, take ${\displaystyle X}$ and ${\displaystyle Y}$ to be independent, each having the value 0 with probability 0.99 and the value 1 otherwise. When ${\displaystyle W=1}$, ${\displaystyle X}$ and ${\displaystyle Y}$ are again independent, but this time they take the value 1 with probability 0.99. Then ${\displaystyle (X\perp \!\!\!\perp Y)\mid W}$. But ${\displaystyle X}$ and ${\displaystyle Y}$ are dependent, because Pr(X = 0) < Pr(X = 0 | Y = 0): here Pr(X = 0) = 0.5, but if Y = 0 then it is very likely that W = 0 and thus that X = 0 as well, so Pr(X = 0 | Y = 0) > 0.5.

For the second example, suppose ${\displaystyle X\perp \!\!\!\perp Y}$, each taking the values 0 and 1 with probability 0.5. Let ${\displaystyle W}$ be the product ${\displaystyle X\cdot Y}$. Then when ${\displaystyle W=0}$, Pr(X = 0) = 2/3, but Pr(X = 0 | Y = 0) = 1/2, so ${\displaystyle (X\perp \!\!\!\perp Y)\mid W}$ is false. This is also an example of explaining away. See Kevin Murphy's tutorial,[3] where ${\displaystyle X}$ and ${\displaystyle Y}$ take the values "brainy" and "sporty".
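
The first example can be checked by exact enumeration of the joint law of ${\displaystyle (W,X,Y)}$; the sketch below verifies both the conditional independence given ${\displaystyle W}$ and the marginal dependence:

```python
from fractions import Fraction

# Joint law of (W, X, Y) from the first example: W is 0 or 1 with
# probability 1/2, and given W, X and Y are i.i.d. with Pr(X = W | W) = 0.99.
joint = {}
for w in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            px = Fraction(99, 100) if x == w else Fraction(1, 100)
            py = Fraction(99, 100) if y == w else Fraction(1, 100)
            joint[(w, x, y)] = Fraction(1, 2) * px * py

def pr(pred):
    return sum(p for k, p in joint.items() if pred(*k))

# Conditional independence given W holds by construction:
for w in (0, 1):
    pw = pr(lambda W, X, Y: W == w)
    assert pr(lambda W, X, Y: W == w and X == 0 and Y == 0) / pw == \
           (pr(lambda W, X, Y: W == w and X == 0) / pw) * \
           (pr(lambda W, X, Y: W == w and Y == 0) / pw)

# ...but X and Y are marginally dependent:
p_x0 = pr(lambda W, X, Y: X == 0)
p_x0_given_y0 = pr(lambda W, X, Y: X == 0 and Y == 0) / pr(lambda W, X, Y: Y == 0)
assert p_x0 == Fraction(1, 2)
assert p_x0_given_y0 == Fraction(4901, 5000)  # 0.9802 > 0.5
```

Using `Fraction` rather than floats keeps the equalities exact, so the independence checks are not disturbed by rounding.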

## Conditional independence of random vectors

Two random vectors ${\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{l})^{\mathrm {T} }}$ and ${\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{m})^{\mathrm {T} }}$ are conditionally independent given a third random vector ${\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{n})^{\mathrm {T} }}$ if and only if they are independent in their conditional cumulative distribution given ${\displaystyle \mathbf {Z} }$. Formally:

${\displaystyle (\mathbf {X} \perp \!\!\!\perp \mathbf {Y} )\mid \mathbf {Z} \quad \iff \quad F_{\mathbf {X} ,\mathbf {Y} |\mathbf {Z} =\mathbf {z} }(\mathbf {x} ,\mathbf {y} )=F_{\mathbf {X} \,\mid \,\mathbf {Z} \,=\,\mathbf {z} }(\mathbf {x} )\cdot F_{\mathbf {Y} \,\mid \,\mathbf {Z} \,=\,\mathbf {z} }(\mathbf {y} )\quad {\text{for all }}\mathbf {x} ,\mathbf {y} ,\mathbf {z} }$

(Eq.3)

where ${\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{l})^{\mathrm {T} }}$, ${\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{m})^{\mathrm {T} }}$ and ${\displaystyle \mathbf {z} =(z_{1},\ldots ,z_{n})^{\mathrm {T} }}$ and the conditional cumulative distributions are defined as follows.

${\displaystyle {\begin{aligned}F_{\mathbf {X} ,\mathbf {Y} \,\mid \,\mathbf {Z} \,=\,\mathbf {z} }(\mathbf {x} ,\mathbf {y} )&=\Pr(X_{1}\leq x_{1},\ldots ,X_{l}\leq x_{l},Y_{1}\leq y_{1},\ldots ,Y_{m}\leq y_{m}\mid Z_{1}=z_{1},\ldots ,Z_{n}=z_{n})\\[6pt]F_{\mathbf {X} \,\mid \,\mathbf {Z} \,=\,\mathbf {z} }(\mathbf {x} )&=\Pr(X_{1}\leq x_{1},\ldots ,X_{l}\leq x_{l}\mid Z_{1}=z_{1},\ldots ,Z_{n}=z_{n})\\[6pt]F_{\mathbf {Y} \,\mid \,\mathbf {Z} \,=\,\mathbf {z} }(\mathbf {y} )&=\Pr(Y_{1}\leq y_{1},\ldots ,Y_{m}\leq y_{m}\mid Z_{1}=z_{1},\ldots ,Z_{n}=z_{n})\end{aligned}}}$

## Uses in Bayesian inference

Let p be the proportion of voters who will vote "yes" in an upcoming referendum. In taking an opinion poll, one chooses n voters randomly from the population. For i = 1, ..., n, let Xi = 1 if the ith chosen voter will vote "yes" and Xi = 0 otherwise.

In a frequentist approach to statistical inference one would not attribute any probability distribution to p (unless the probabilities could be somehow interpreted as relative frequencies of occurrence of some event or as proportions of some population) and one would say that X1, ..., Xn are independent random variables.

By contrast, in a Bayesian approach to statistical inference, one would assign a probability distribution to p regardless of the non-existence of any such "frequency" interpretation, and one would construe the probabilities as degrees of belief that p is in any interval to which a probability is assigned. In that model, the random variables X1, ..., Xn are not independent, but they are conditionally independent given the value of p. In particular, if a large number of the Xs are observed to be equal to 1, that would imply a high conditional probability, given that observation, that p is near 1, and thus a high conditional probability, given that observation, that the next X to be observed will be equal to 1.
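
A toy version of this model makes the point concrete. Here a hypothetical discrete uniform prior on p over the grid {0, 1/4, 1/2, 3/4, 1} stands in for a continuous prior (the grid is an assumption made purely for illustration):

```python
from fractions import Fraction

# Hypothetical discrete uniform prior on p.
ps = [Fraction(k, 4) for k in range(5)]
prior = {p: Fraction(1, 5) for p in ps}

def pr_x1(x):
    """Marginal Pr(X1 = x), averaging over the prior."""
    return sum(w * (p if x == 1 else 1 - p) for p, w in prior.items())

def pr_x1x2(x1, x2):
    """Marginal Pr(X1 = x1, X2 = x2); the votes are i.i.d. *given* p."""
    return sum(w * (p if x1 == 1 else 1 - p) * (p if x2 == 1 else 1 - p)
               for p, w in prior.items())

# Marginally the votes are dependent: observing one "yes" makes values of p
# near 1 more plausible, which makes the next "yes" more likely.
assert pr_x1(1) == Fraction(1, 2)
p_yes_given_yes = pr_x1x2(1, 1) / pr_x1(1)
assert p_yes_given_yes == Fraction(3, 4) > pr_x1(1)
```

The computation is just the law of total probability: Pr(X1 = 1) = E[p] = 1/2 while Pr(X1 = 1, X2 = 1) = E[p²] = 3/8 > 1/4, exactly the marginal dependence the text describes.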

## Rules of conditional independence

A set of rules governing statements of conditional independence has been derived from the basic definition.[4][5]

Note: since these implications hold for any probability space, they will still hold if one considers a sub-universe by conditioning everything on another variable, say K. For example, ${\displaystyle X\perp \!\!\!\perp Y\Rightarrow Y\perp \!\!\!\perp X}$ would also mean that ${\displaystyle X\perp \!\!\!\perp Y\mid K\Rightarrow Y\perp \!\!\!\perp X\mid K}$.

Note: below, the comma can be read as an "AND".

### Symmetry

${\displaystyle X\perp \!\!\!\perp Y\quad \Rightarrow \quad Y\perp \!\!\!\perp X}$

### Decomposition

${\displaystyle X\perp \!\!\!\perp A,B\quad \Rightarrow \quad {\begin{cases}X\perp \!\!\!\perp A\\X\perp \!\!\!\perp B\end{cases}}}$

Proof:

• ${\displaystyle p_{X,A,B}(x,a,b)=p_{X}(x)p_{A,B}(a,b)}$      (meaning of ${\displaystyle X\perp \!\!\!\perp A,B}$)
• ${\displaystyle \int _{B}\!p_{X,A,B}(x,a,b)\,db=\int _{B}\!p_{X}(x)p_{A,B}(a,b)\,db}$      (ignore variable B by integrating it out)
• ${\displaystyle p_{X,A}(x,a)=p_{X}(x)p_{A}(a)}$

A similar proof shows the independence of X and B.
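
For discrete variables the integral becomes a sum, and the proof can be replayed numerically. The sketch below starts from an invented joint in which ${\displaystyle X\perp \!\!\!\perp A,B}$ and sums b out:

```python
from fractions import Fraction

# Invented joint in which X is independent of the pair (A, B).
pX = {0: Fraction(1, 3), 1: Fraction(2, 3)}
pAB = {(0, 0): Fraction(1, 2), (0, 1): Fraction(1, 4),
       (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 8)}
joint = {(x, a, b): pX[x] * pAB[a, b] for x in pX for (a, b) in pAB}

# Sum b out of the joint, as the integral does in the proof:
pXA = {}
for (x, a, b), p in joint.items():
    pXA[(x, a)] = pXA.get((x, a), Fraction(0)) + p

# The marginal factorises, i.e. X ⊥⊥ A:
pA = {a: pAB[a, 0] + pAB[a, 1] for a in (0, 1)}
assert all(pXA[(x, a)] == pX[x] * pA[a] for x in pX for a in pA)
```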

### Weak union

${\displaystyle X\perp \!\!\!\perp A,B\quad \Rightarrow \quad {\begin{cases}X\perp \!\!\!\perp A\mid B\\X\perp \!\!\!\perp B\mid A\end{cases}}}$

Proof:

• By definition of ${\displaystyle X\perp \!\!\!\perp A,B}$, ${\displaystyle \Pr(X)=\Pr(X\mid A,B)}$.
• By the decomposition property, ${\displaystyle X\perp \!\!\!\perp B}$, so ${\displaystyle \Pr(X)=\Pr(X\mid B)}$.
• Combining the above two equalities gives ${\displaystyle \Pr(X\mid B)=\Pr(X\mid A,B)}$, which establishes ${\displaystyle X\perp \!\!\!\perp A\mid B}$.

The second condition can be proved similarly.
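
Weak union can also be checked cell by cell on an invented discrete joint with ${\displaystyle X\perp \!\!\!\perp A,B}$:

```python
from fractions import Fraction

# Invented joint in which X is independent of the pair (A, B).
pX = {0: Fraction(1, 4), 1: Fraction(3, 4)}
pAB = {(0, 0): Fraction(1, 6), (0, 1): Fraction(1, 3),
       (1, 0): Fraction(1, 3), (1, 1): Fraction(1, 6)}
joint = {(x, a, b): pX[x] * pAB[a, b] for x in pX for (a, b) in pAB}

def pr(pred):
    return sum(p for k, p in joint.items() if pred(*k))

# Verify X ⊥⊥ A | B: Pr(X=x, A=a | B=b) == Pr(X=x | B=b) · Pr(A=a | B=b).
for b in (0, 1):
    pb = pr(lambda x, a, bb: bb == b)
    for x in (0, 1):
        for a in (0, 1):
            lhs = joint[(x, a, b)] / pb
            rhs = (pr(lambda xx, aa, bb: xx == x and bb == b) / pb) * \
                  (pr(lambda xx, aa, bb: aa == a and bb == b) / pb)
            assert lhs == rhs
```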

### Contraction

${\displaystyle \left.{\begin{aligned}X\perp \!\!\!\perp A\mid B\\X\perp \!\!\!\perp B\end{aligned}}\right\}\quad \Rightarrow \quad X\perp \!\!\!\perp A,B}$

Proof:

This property can be proved by noticing that ${\displaystyle \Pr(X\mid A,B)=\Pr(X\mid B)=\Pr(X)}$, where the first equality is asserted by ${\displaystyle X\perp \!\!\!\perp A\mid B}$ and the second by ${\displaystyle X\perp \!\!\!\perp B}$.
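
A numerical check: build an invented joint of the form Pr(x, a, b) = Pr(x) Pr(b) Pr(a | b), which satisfies both premises (X ⊥⊥ A | B and X ⊥⊥ B), and verify the conclusion X ⊥⊥ (A, B):

```python
from fractions import Fraction

# Invented joint satisfying both premises of contraction.
pX = {0: Fraction(2, 5), 1: Fraction(3, 5)}
pB = {0: Fraction(1, 3), 1: Fraction(2, 3)}
pA_given_B = {0: {0: Fraction(1, 4), 1: Fraction(3, 4)},
              1: {0: Fraction(1, 2), 1: Fraction(1, 2)}}
joint = {(x, a, b): pX[x] * pB[b] * pA_given_B[b][a]
         for x in (0, 1) for a in (0, 1) for b in (0, 1)}

def pr(pred):
    return sum(p for k, p in joint.items() if pred(*k))

# Conclusion: X ⊥⊥ (A, B), i.e. Pr(x, a, b) = Pr(x) · Pr(a, b) everywhere.
for (x, a, b), p in joint.items():
    assert p == pr(lambda X, A, B: X == x) * pr(lambda X, A, B: A == a and B == b)
```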

### Contraction-weak-union-decomposition

Putting the above three together, we have:

${\displaystyle \left.{\begin{aligned}X\perp \!\!\!\perp A\mid B\\X\perp \!\!\!\perp B\end{aligned}}\right\}\quad \iff \quad X\perp \!\!\!\perp A,B\quad \Rightarrow \quad {\begin{cases}X\perp \!\!\!\perp A\mid B\\X\perp \!\!\!\perp B\\X\perp \!\!\!\perp B\mid A\\X\perp \!\!\!\perp A\end{cases}}}$

### Intersection

For strictly positive probability distributions,[5] the following also holds:

${\displaystyle \left.{\begin{aligned}X\perp \!\!\!\perp A\mid C,B\\X\perp \!\!\!\perp B\mid C,A\end{aligned}}\right\}\quad \Rightarrow \quad X\perp \!\!\!\perp B,A\mid C}$

The five rules above were termed "Graphoid Axioms" by Pearl and Paz,[6] because they hold in graphs, if ${\displaystyle X\perp \!\!\!\perp A\mid B}$ is interpreted to mean: "All paths from X to A are intercepted by the set B".[7]

6. ^ Pearl, Judea; Paz, Azaria (1985). "Graphoids: A Graph-Based Logic for Reasoning About Relevance Relations".