# Multivariate mutual information

*Venn diagram of information-theoretic measures for three variables x, y, and z, represented by the lower left, lower right, and upper circles, respectively. The multivariate mutual information is represented by the gray region. Since it may be negative, the areas on the diagram represent signed measures.*

In information theory there have been various attempts over the years to extend the definition of mutual information to more than two random variables. The expression and study of multivariate higher-degree mutual information were achieved in two seemingly independent works: McGill (1954), who called these functions "interaction information", and Hu Kuo Ting (1962), who first proved that mutual information can be negative for degrees higher than 2 and gave an algebraic justification for the intuitive correspondence to Venn diagrams.

## Definition

The conditional mutual information can be used to inductively define a multivariate mutual information (MMI) in a set- or measure-theoretic sense in the context of information diagrams. In this sense we define the multivariate mutual information as follows:

$I(X_{1};\ldots ;X_{n+1})=I(X_{1};\ldots ;X_{n})-I(X_{1};\ldots ;X_{n}|X_{n+1}),$

where

$I(X_{1};\ldots ;X_{n}|X_{n+1})=\mathbb {E} _{X_{n+1}}[D_{\mathrm {KL} }(P_{(X_{1},\ldots ,X_{n})|X_{n+1}}\|P_{X_{1}|X_{n+1}}\otimes \cdots \otimes P_{X_{n}|X_{n+1}})].$

This definition is identical to that of interaction information except for a change in sign when the number of random variables is odd.

Alternatively, the multivariate mutual information may be defined in measure-theoretic terms as the measure of the intersection of the sets ${\tilde {X}}_{i}$ corresponding to the individual entropies $\mu ({\tilde {X}}_{i})$:

$I(X_{1};X_{2};\ldots ;X_{n+1})=\mu \left(\bigcap _{i=1}^{n+1}{\tilde {X}}_{i}\right).$

Defining ${\tilde {Y}}=\bigcap _{i=1}^{n}{\tilde {X}}_{i}$, the set-theoretic identity ${\tilde {A}}=({\tilde {A}}\cap {\tilde {B}})\cup ({\tilde {A}}\backslash {\tilde {B}})$, which corresponds to the measure-theoretic statement $\mu ({\tilde {A}})=\mu ({\tilde {A}}\cap {\tilde {B}})+\mu ({\tilde {A}}\backslash {\tilde {B}})$, allows the above to be rewritten as

$I(X_{1};X_{2};\ldots ;X_{n+1})=\mu ({\tilde {Y}}\cap {\tilde {X}}_{n+1})=\mu ({\tilde {Y}})-\mu ({\tilde {Y}}\backslash {\tilde {X}}_{n+1}),$

which is identical to the first definition, since $\mu ({\tilde {Y}})=I(X_{1};\ldots ;X_{n})$ and $\mu ({\tilde {Y}}\backslash {\tilde {X}}_{n+1})=I(X_{1};\ldots ;X_{n}|X_{n+1})$.
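The inductive definition can be evaluated directly for three variables once their joint distribution is known. A minimal sketch in Python, assuming the joint distribution is represented as a dict mapping outcome tuples to probabilities (the helper names `entropy`, `marginal`, `mi`, `cmi`, and `mmi3` are illustrative, not from any library):

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def cmi(joint, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (entropy(marginal(joint, a + c)) + entropy(marginal(joint, b + c))
            - entropy(marginal(joint, a + b + c)) - entropy(marginal(joint, c)))

def mmi3(joint):
    """I(X;Y;Z) = I(X;Y) - I(X;Y|Z), the inductive definition for n = 2."""
    return mi(joint, (0,), (1,)) - cmi(joint, (0,), (1,), (2,))

# Three perfectly correlated fair bits: X = Y = Z.
copy = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(mmi3(copy))  # 1.0 bit: I(X;Y) = 1 and I(X;Y|Z) = 0
```

Here the MMI is positive because conditioning on $Z$ destroys the dependence between $X$ and $Y$.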

## Properties

Multivariate mutual information and conditional multivariate mutual information can be decomposed into alternating sums of entropies:

$I(X_{1};\ldots ;X_{n})=-\sum _{T\subseteq \{1,\ldots ,n\}}(-1)^{|T|}H(T)$

$I(X_{1};\ldots ;X_{n}|Y)=-\sum _{T\subseteq \{1,\ldots ,n\}}(-1)^{|T|}H(T|Y)$

## Multivariate statistical independence

The multivariate mutual-information functions generalize the pairwise independence criterion, which states that $X_{1}$ and $X_{2}$ are independent if and only if $I(X_{1};X_{2})=0$, to arbitrarily many variables: $n$ variables are mutually independent if and only if all $2^{n}-n-1$ mutual-information functions vanish, i.e. $I(X_{1};\ldots ;X_{k})=0$ for every subset of $k$ variables with $n\geq k\geq 2$ (theorem 2). In this sense, the conditions $I(X_{1};\ldots ;X_{k})=0$ can be used as a refined statistical-independence criterion.
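This criterion can be tested exhaustively for small systems by evaluating the alternating entropy sum from the decomposition above over every subset of size at least 2. A sketch in Python, assuming the joint distribution is a dict of outcome tuples to probabilities (all helper names are illustrative):

```python
from itertools import combinations, product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mmi(joint, axes):
    """I(X_a; ...; X_b) over the given axes via the alternating entropy sum."""
    total = 0.0
    for r in range(1, len(axes) + 1):
        for sub in combinations(axes, r):
            total -= (-1) ** len(sub) * entropy(marginal(joint, sub))
    return total

def mutually_independent(joint, n, tol=1e-12):
    """True iff all 2^n - n - 1 MMI functions of degree >= 2 vanish."""
    return all(abs(mmi(joint, sub)) <= tol
               for r in range(2, n + 1)
               for sub in combinations(range(n), r))

# Three independent fair bits: every MMI of degree >= 2 is zero.
indep = {(x, y, z): 0.125 for x, y, z in product((0, 1), repeat=3)}
# XOR constraint: all pairwise MMIs vanish, but I(X;Y;Z) = -1 bit.
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
print(mutually_independent(indep, 3), mutually_independent(xor, 3))  # True False
```

The XOR case shows why the refinement matters: the three variables are pairwise independent, so pairwise tests alone would miss the dependence detected by the degree-3 term.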

## Synergy and redundancy

The multivariate mutual information may be positive, negative, or zero. Positivity corresponds to relations generalizing pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clustered data points.

For the simplest case of three variables X, Y, and Z, knowing, say, X yields a certain amount of information about Z: the mutual information $I(Z;X)$ (yellow and gray in the Venn diagram above). Likewise, knowing Y yields the mutual information $I(Y;Z)$ about Z (cyan and gray in the Venn diagram above). The amount of information about Z yielded by knowing both X and Y together is the information mutual to Z and the X,Y pair, written $I(X,Y;Z)$ (yellow, gray and cyan in the Venn diagram above); it may be greater than, equal to, or less than the sum of the two pairwise mutual informations, the difference being the multivariate mutual information: $I(X;Y;Z)=I(Y;Z)+I(Z;X)-I(X,Y;Z)$.

When the sum of the two pairwise mutual informations exceeds $I(X,Y;Z)$, the multivariate mutual information is positive. In this case, some of the information about Z provided by knowing X is also provided by knowing Y, so their sum overcounts the information obtained by knowing both together: there is a "redundancy" in the information about Z provided by the X and Y variables. When the sum is less than $I(X,Y;Z)$, the multivariate mutual information is negative. In this case, knowing both X and Y together provides more information about Z than the sum of what each yields alone: there is a "synergy" in the information about Z provided by the X and Y variables.
The above explanation is intended to give an intuitive understanding of the multivariate mutual information, but it obscures the fact that the quantity does not depend on which variable is taken as the subject (e.g., Z in the above example) and which other two are thought of as the source of information. For three variables, Brenner et al. applied multivariate mutual information to neural coding and called its negativity "synergy", and Watkinson et al. applied it to genetic expression.
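The claimed symmetry can be checked numerically: evaluating $I(A;B;C)=I(A;C)+I(B;C)-I(A,B;C)$ with each variable in turn playing the role of the subject gives the same value. A sketch in Python, with illustrative helper names and the joint distribution as a dict:

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def mmi_with_subject(joint, a, b, subject):
    """I(a;b;subject) = I(a;subject) + I(b;subject) - I(a,b;subject)."""
    return (mi(joint, a, subject) + mi(joint, b, subject)
            - mi(joint, a + b, subject))

# Z = X xor Y with independent fair inputs, outcomes ordered (x, y, z).
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
vals = [mmi_with_subject(xor, (0,), (1,), (2,)),
        mmi_with_subject(xor, (1,), (2,), (0,)),
        mmi_with_subject(xor, (2,), (0,), (1,))]
print(vals)  # [-1.0, -1.0, -1.0]: the same value whichever variable is the subject
```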

### Example of positive multivariate mutual information (redundancy)

Positive MMI is typical of common-cause structures. For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, $I({\text{rain}};{\text{dark}}|{\text{cloud}})\leq I({\text{rain}};{\text{dark}})$ . The result is positive MMI $I({\text{rain}};{\text{dark}};{\text{cloud}})$ .
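This common-cause structure can be made concrete with a small joint distribution. The conditional probabilities below are made-up illustrative numbers; rain and darkness are constructed to be conditionally independent given cloud, so $I({\text{rain}};{\text{dark}}|{\text{cloud}})=0$ and the MMI equals the (positive) marginal correlation $I({\text{rain}};{\text{dark}})$:

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def cmi(joint, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (entropy(marginal(joint, a + c)) + entropy(marginal(joint, b + c))
            - entropy(marginal(joint, a + b + c)) - entropy(marginal(joint, c)))

# Illustrative (made-up) parameters for the cloud -> {rain, dark} structure.
p_cloud = 0.5
p_rain_given = {1: 0.8, 0: 0.1}   # P(rain = 1 | cloud)
p_dark_given = {1: 0.9, 0: 0.2}   # P(dark = 1 | cloud)

joint = {}
for rain, dark, cloud in product((0, 1), repeat=3):
    pr = p_rain_given[cloud] if rain else 1 - p_rain_given[cloud]
    pd = p_dark_given[cloud] if dark else 1 - p_dark_given[cloud]
    pc = p_cloud if cloud else 1 - p_cloud
    joint[(rain, dark, cloud)] = pc * pr * pd  # rain, dark indep. given cloud

mmi = mi(joint, (0,), (1,)) - cmi(joint, (0,), (1,), (2,))
print(mmi > 0)  # True: positive MMI, pure redundancy
```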

### Examples of negative multivariate mutual information (synergy)

The case of negative MMI is infamously non-intuitive. A prototypical example of negative $I(X;Y;Z)$ has $X$ as the output of an XOR gate whose independent random inputs are $Y$ and $Z$. In this case $I(Y;Z)$ is zero, but $I(Y;Z|X)$ is positive (1 bit), since once the output $X$ is known, the value of input $Y$ completely determines the value of input $Z$. Since $I(Y;Z|X)>I(Y;Z)$, the result is negative MMI $I(X;Y;Z)$. It may seem that this example relies on a peculiar ordering of $X,Y,Z$ to obtain the synergy, but the symmetry of the definition of $I(X;Y;Z)$ indicates that the same result holds regardless of which variable we consider as the interloper or conditioning variable. For example, input $Y$ and output $X$ are also independent until input $Z$ is fixed, at which time they are totally dependent.
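The quantities in the XOR example can be computed explicitly. A sketch in Python, with the joint distribution as a dict of `(x, y, z)` tuples and illustrative helper names:

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def cmi(joint, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (entropy(marginal(joint, a + c)) + entropy(marginal(joint, b + c))
            - entropy(marginal(joint, a + b + c)) - entropy(marginal(joint, c)))

# X = Y xor Z with Y, Z independent fair bits; outcomes ordered (x, y, z).
xor = {(y ^ z, y, z): 0.25 for y, z in product((0, 1), repeat=2)}

i_yz = mi(xor, (1,), (2,))                 # I(Y;Z)
i_yz_given_x = cmi(xor, (1,), (2,), (0,))  # I(Y;Z|X)
print(i_yz, i_yz_given_x, i_yz - i_yz_given_x)  # 0.0 1.0 -1.0
```

The final value, $I(X;Y;Z)=I(Y;Z)-I(Y;Z|X)=-1$ bit, is the synergy of the XOR gate.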

This situation is an instance where fixing the common effect $X$ of causes $Y$ and $Z$ induces a dependency among the causes that did not formerly exist. This behavior is colloquially referred to as explaining away and is thoroughly discussed in the Bayesian Network literature (e.g., Pearl 1988). Pearl's example is auto diagnostics: A car's engine can fail to start $(X)$ due either to a dead battery $(Y)$ or due to a blocked fuel pump $(Z)$ . Ordinarily, we assume that battery death and fuel pump blockage are independent events, because of the essential modularity of such automotive systems. Thus, in the absence of other information, knowing whether or not the battery is dead gives us no information about whether or not the fuel pump is blocked. However, if we happen to know that the car fails to start (i.e., we fix common effect $X$ ), this information induces a dependency between the two causes battery death and fuel blockage. Thus, knowing that the car fails to start, if an inspection shows the battery to be in good health, we conclude the fuel pump is blocked.

Battery death and fuel blockage are thus dependent, conditional on their common effect car starting. The obvious directionality in the common-effect graph belies a deep informational symmetry: If conditioning on a common effect increases the dependency between its two parent causes, then conditioning on one of the causes must create the same increase in dependency between the second cause and the common effect. In Pearl's automotive example, if conditioning on car starts induces $I(X;Y;Z)$ bits of dependency between the two causes battery dead and fuel blocked, then conditioning on fuel blocked must induce $I(X;Y;Z)$ bits of dependency between battery dead and car starts. This may seem odd because battery dead and car starts are governed by the implication battery dead $\rightarrow$ car doesn't start. However, these variables are still not totally correlated because the converse is not true. Conditioning on fuel blocked removes the major alternate cause of failure to start, and strengthens the converse relation and therefore the association between battery dead and car starts.

### Positivity for Markov chains

If three variables form a Markov chain $X\to Y\to Z$, then $H(X|Y,Z)=H(X|Y)$, so

$I(X;Y,Z)=H(X)-H(X|Y,Z)=H(X)-H(X|Y)=I(X;Y).$

Combining this with the chain rule $I(X;Y,Z)=I(X;Z)+I(X;Y|Z)$ gives

$I(X;Y;Z)=I(X;Y)-I(X;Y|Z)=I(X;Y,Z)-I(X;Y|Z)=I(X;Z)\geq 0.$

## Bounds

The bounds for the 3-variable case are

$-\min \ \{I(X;Y|Z),I(Y;Z|X),I(X;Z|Y)\}\leq I(X;Y;Z)\leq \min \ \{I(X;Y),I(Y;Z),I(X;Z)\}$

## Difficulties

A complication is that this multivariate mutual information (as well as the interaction information) can be positive, negative, or zero, which makes this quantity difficult to interpret intuitively. In fact, for n random variables, there are $2^{n}-1$ degrees of freedom for how they might be correlated in an information-theoretic sense, corresponding to each non-empty subset of these variables. These degrees of freedom are bounded by the various inequalities in information theory.
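The three-variable bounds stated in the previous section can at least be verified numerically on concrete distributions. A sketch in Python, checking the bounds on the synergistic (XOR), redundant (copy), and an arbitrary skewed distribution (all helper names and the skewed probabilities are illustrative):

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint {tuple: prob} onto the given coordinate axes."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def cmi(joint, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (entropy(marginal(joint, a + c)) + entropy(marginal(joint, b + c))
            - entropy(marginal(joint, a + b + c)) - entropy(marginal(joint, c)))

def check_bounds(joint):
    """Verify -min(CMIs) <= I(X;Y;Z) <= min(MIs) for a 3-variable joint."""
    X, Y, Z = (0,), (1,), (2,)
    mmi = mi(joint, X, Y) - cmi(joint, X, Y, Z)
    lower = -min(cmi(joint, X, Y, Z), cmi(joint, Y, Z, X), cmi(joint, X, Z, Y))
    upper = min(mi(joint, X, Y), mi(joint, Y, Z), mi(joint, X, Z))
    return lower - 1e-12 <= mmi <= upper + 1e-12

xor = {(y ^ z, y, z): 0.25 for y, z in product((0, 1), repeat=2)}   # MMI = -1
copy = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}                             # MMI = +1
skewed = dict(zip(product((0, 1), repeat=3),
                  (0.25, 0.05, 0.1, 0.1, 0.1, 0.1, 0.05, 0.25)))    # arbitrary
print(all(check_bounds(j) for j in (xor, copy, skewed)))  # True
```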