Whitening transformation


A whitening transformation is a decorrelation transformation that transforms an arbitrary set of variables having a known covariance matrix M into a set of new variables whose covariance is the identity matrix (meaning that they are uncorrelated and all have variance 1).

The transformation is called "whitening" because it changes the input vector into a white noise vector. It differs from a general decorrelation transformation in that the latter only makes the covariances zero, so that the resulting covariance matrix may be any diagonal matrix.

The inverse operation, the coloring transformation, transforms a vector Y of uncorrelated variables (a white random vector) into a vector X with a specified covariance matrix.
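As a quick check (using the symmetric square root M^{1/2} defined in the next section), a white vector Y colored as X = M^{1/2} Y does have the specified covariance:

 \operatorname{Cov}(X) = M^{1/2} \operatorname{E}[Y Y^T] (M^{1/2})^T = M^{1/2} I M^{1/2} = M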

Definition

Suppose X is a random (column) vector with mean 0 and covariance matrix M. Typically, when M is not singular, whitening X means multiplying it by M^{-1/2}.

The matrix M can be written as the expected value of the outer product of X with itself, namely:

 M = \operatorname{E}[X X^T]

When M is symmetric and positive definite (and therefore not singular), it has a positive definite symmetric square root M^{1/2} such that M^{1/2} M^{1/2} = M. Since M is positive definite, M^{1/2} is invertible, and the vector Y = M^{-1/2} X has covariance matrix:

 \operatorname{Cov}(Y) = \operatorname{E}[Y Y^T] = M^{-1/2} \operatorname{E}[X X^T]  (M^{-1/2})^T = M^{-1/2} M M^{-1/2} = I

and is therefore a white random vector.
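A minimal numerical sketch of this construction, assuming NumPy (the sample data, variable names, and the eigendecomposition route to M^{-1/2} are illustrative choices, not part of the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated, zero-mean sample data: rows are observations, columns are variables.
    A = rng.standard_normal((3, 3))
    X = rng.standard_normal((10_000, 3)) @ A.T

    # Sample covariance matrix M.
    M = np.cov(X, rowvar=False)

    # Symmetric square root via the eigendecomposition M = V diag(w) V^T,
    # so M^{-1/2} = V diag(w^{-1/2}) V^T.
    w, V = np.linalg.eigh(M)
    M_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Whitened data Y = X M^{-1/2} (row-vector convention); its covariance is ~I.
    Y = X @ M_inv_sqrt
    print(np.cov(Y, rowvar=False).round(3))   # approximately the identity matrix

Other whitening matrices (for example, a Cholesky factor of M^{-1}) would also make the covariance the identity; the symmetric M^{-1/2} is the choice described in this article.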

If M is singular (and hence not positive definite), then M^{1/2} is not invertible, and it is impossible to map X to a white vector with the same number of components. In that case the vector X can still be mapped to a smaller white vector Y with m elements, where m is the number of non-zero eigenvalues of M.
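A sketch of the singular case along the same lines (again assuming NumPy; the tolerance and dimensions are illustrative): keep only the eigenvectors of M with non-zero eigenvalues and rescale along them.

    import numpy as np

    rng = np.random.default_rng(1)

    # Rank-deficient example: 3 observed variables driven by only 2 latent ones,
    # so the covariance matrix M = B B^T is singular.
    B = rng.standard_normal((3, 2))
    X = rng.standard_normal((10_000, 2)) @ B.T

    M = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(M)

    # Keep the m eigenpairs whose eigenvalues are (numerically) non-zero.
    keep = w > 1e-10 * w.max()
    W = V[:, keep] / np.sqrt(w[keep])   # n x m reduced whitening matrix

    # Y has only m components, with covariance ~I_m.
    Y = X @ W
    print(Y.shape, np.cov(Y, rowvar=False).round(3))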
