# Autocovariance

In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the process with a time-shifted version of itself. If the process has mean function $E[X_t] = \mu_t$, then the autocovariance is given by

$C_{XX}(t,s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t X_s] - \mu_t \mu_s,\,$

where E is the expectation operator.

Autocovariance is related to the more commonly used autocorrelation through the variance of the process: dividing the autocovariance by the variance yields the autocorrelation coefficient.
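The definition above can be estimated from a finite sample. The sketch below (a hypothetical helper, not from the source) computes the biased sample autocovariance of an i.i.d. Gaussian series, for which the autocovariance should be the variance at lag 0 and near zero elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)
# i.i.d. draws with mean 2 and unit variance: a trivially stationary process
x = rng.normal(loc=2.0, scale=1.0, size=10_000)

def sample_autocov(x, lag):
    """Biased sample autocovariance: mean((x_t - m) * (x_{t+lag} - m))."""
    m = x.mean()
    n = len(x)
    return np.sum((x[: n - lag] - m) * (x[lag:] - m)) / n

print(sample_autocov(x, 0))  # ≈ 1 (the variance of the process)
print(sample_autocov(x, 5))  # ≈ 0 for independent draws
```

Dividing by `n` rather than `n - lag` gives the biased estimator, which is the conventional choice because it guarantees a positive semi-definite autocovariance sequence.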

## Stationarity

If X(t) is a (weakly) stationary process, then the following hold:

$\mu_t = \mu_s = \mu \,$ for all t, s

and

$C_{XX}(t,s) = C_{XX}(s-t) = C_{XX}(\tau)\,$

where

$\tau = s - t\,$

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

$C_{XX}(\tau) = E[(X(t) - \mu)(X(t+\tau) - \mu)]\,$
$= E[X(t) X(t+\tau)] - \mu^2\,$
$= R_{XX}(\tau) - \mu^2,\,$

where $R_{XX}(\tau) = E[X(t) X(t+\tau)]$ is the autocorrelation function.
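The identity $C_{XX}(\tau) = R_{XX}(\tau) - \mu^2$ can be checked numerically. The sketch below (a hypothetical example, assuming an AR(1) process with nonzero mean) estimates $R_{XX}(\tau)$ from a sample, subtracts the squared sample mean, and compares the result with the known AR(1) autocovariance $\varphi^{|\tau|}\sigma_\varepsilon^2/(1-\varphi^2)$:

```python
import numpy as np

# AR(1) around a nonzero mean: X_t - mu = phi * (X_{t-1} - mu) + eps_t
rng = np.random.default_rng(1)
mu, phi, n = 3.0, 0.6, 100_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + eps[t]

tau = 2
# R_XX(tau) = E[X(t) X(t + tau)], estimated from the sample
R = np.mean(x[:-tau] * x[tau:])
# C_XX(tau) via the identity R_XX(tau) - mu^2
C = R - x.mean() ** 2
# theoretical AR(1) autocovariance: phi^tau * sigma_eps^2 / (1 - phi^2)
print(C, phi**tau / (1 - phi**2))
```

Both numbers agree up to sampling error, illustrating that subtracting $\mu^2$ removes the contribution of the mean from the raw second moment.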

## Normalization

When normalized by dividing by the variance σ², the autocovariance $C_{XX}$ becomes the autocorrelation coefficient function $c_{XX}$,[1]

$c_{XX}(\tau) = \frac{C_{XX}(\tau)}{\sigma^2}.\,$

However, often the autocovariance is called autocorrelation even if this normalization has not been performed.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself: an autocovariance of σ² indicates perfect correlation at that lag. Normalizing by the variance maps the autocovariance into the range [−1, 1].
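The normalization can be illustrated on an AR(1) process, whose autocorrelation coefficient at lag $k$ is known to be $\varphi^{|k|}$. The sketch below (a hypothetical example with an assumed helper `autocov`) divides the sample autocovariance by the lag-0 value so that $c_{XX}(0) = 1$ and all values lie in [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.8, 50_000
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]  # zero-mean AR(1) process

def autocov(z, lag):
    """Biased sample autocovariance at the given lag."""
    m = z.mean()
    return np.mean((z[: len(z) - lag] - m) * (z[lag:] - m))

var = autocov(x, 0)  # sigma^2, the lag-0 autocovariance
c = [autocov(x, k) / var for k in range(4)]  # normalized coefficients
print(c)  # ≈ [1, 0.8, 0.64, 0.512], the theoretical phi^k values
```

Note that $c_{XX}(0)$ is exactly 1 by construction, since the lag-0 autocovariance is the variance itself.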

## Properties

The autocovariance of a linearly filtered process $Y_t$

$Y_t = \sum_{k=-\infty}^\infty a_k X_{t+k}\,$

is

$C_{YY}(\tau) = \sum_{k,l=-\infty}^\infty a_k a^*_l C_{XX}(\tau+k-l).\,$
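This filtering property can be verified on a simple case. The sketch below (a hypothetical two-tap FIR filter, chosen for illustration) applies $Y_t = a_0 X_t + a_1 X_{t+1}$ to unit-variance white noise, for which $C_{XX}(0) = 1$ and $C_{XX}(m) = 0$ otherwise, so the sum collapses to the terms with $l = \tau + k$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x = rng.normal(size=n)            # white noise: C_XX(0) = 1, C_XX(m) = 0 otherwise
a = np.array([1.0, 0.5])          # real filter coefficients a_0, a_1 (hypothetical)
y = a[0] * x[:-1] + a[1] * x[1:]  # Y_t = a_0 * X_t + a_1 * X_{t+1}

def autocov(z, lag):
    """Biased sample autocovariance at the given lag."""
    m = z.mean()
    return np.mean((z[: len(z) - lag] - m) * (z[lag:] - m))

# formula: C_YY(tau) = sum_{k,l} a_k a_l C_XX(tau + k - l); with white-noise X
# only terms with l = tau + k survive: C_YY(0) = a_0^2 + a_1^2 = 1.25,
# C_YY(1) = a_0 * a_1 = 0.5, and C_YY vanishes for |tau| > 1
print(autocov(y, 0), autocov(y, 1), autocov(y, 2))
```

For real coefficients the conjugate $a^*_l$ in the formula reduces to $a_l$; the conjugate matters only when the filter or the process is complex-valued.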