Autocovariance

In probability theory and statistics, given a stochastic process ${\displaystyle X=(X_{t})}$, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. With the usual notation E for the expectation operator, if the process has the mean function ${\displaystyle \mu _{t}=E[X_{t}]}$, then the autocovariance is given by

${\displaystyle C_{XX}(t,s)={\text{cov}}(X_{t},X_{s})=E[(X_{t}-\mu _{t})(X_{s}-\mu _{s})]=E[X_{t}X_{s}]-\mu _{t}\mu _{s},\,}$

where t and s are two points in time.
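For a discrete-time sample, this definition suggests a straightforward estimator: replace the expectations with sample averages. The sketch below uses the common biased (1/n) normalization; the function name and test series are illustrative, not from the source.

```python
import numpy as np

def sample_autocovariance(x, lag):
    """Biased sample estimate of C_XX(lag) for a weakly stationary series x.

    Uses the common 1/n normalization; the process mean is estimated
    by the sample mean. (Illustrative helper, not a named library API.)
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()  # center the series: x_t - mu
    # Average of products of centered values `lag` steps apart.
    return np.dot(xc[:n - lag], xc[lag:]) / n

x = np.array([2.0, 4.0, 6.0, 8.0])
print(sample_autocovariance(x, 0))  # 5.0, the biased sample variance of x
print(sample_autocovariance(x, 1))  # 1.25
```

At lag 0 the estimator reduces to the (biased) sample variance, consistent with ${\displaystyle C_{XX}(0)=\sigma ^{2}}$ below.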

Autocovariance is closely related to the more commonly used autocorrelation of the process in question.

In the case of a multivariate random vector ${\displaystyle X=(X_{1},X_{2},...,X_{n})}$, the autocovariance becomes a square n by n matrix, ${\displaystyle C_{XX}}$, with entry ${\displaystyle i,j}$ given by ${\displaystyle C_{X_{i}X_{j}}(t,s)={\text{cov}}(X_{i,t},X_{j,s})}$ and commonly referred to as the autocovariance matrix associated with vectors ${\displaystyle X_{t}}$ and ${\displaystyle X_{s}}$.

Weak stationarity

If X(t) is a weakly stationary process, then the following are true:

${\displaystyle \mu _{t}=\mu _{s}=\mu \,}$ for all t, s

and

${\displaystyle C_{XX}(t,s)=C_{XX}(s-t)=C_{XX}(\tau )\,}$

where ${\displaystyle \tau =|s-t|}$ is the lag time, or the amount of time by which the signal has been shifted.

Normalization

When the autocovariance ${\displaystyle C_{XX}}$ of a weakly stationary process is normalized by the variance, ${\displaystyle C_{XX}(0)=\sigma ^{2}}$, one obtains the autocorrelation coefficient ${\displaystyle \rho }$:[1]

${\displaystyle \rho _{XX}(\tau )={\frac {C_{XX}(\tau )}{\sigma ^{2}}}}$

with ${\displaystyle -1\leq \rho _{XX}(\tau )\leq 1}$.
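As a sketch of this normalization, dividing the sample autocovariance at lag ${\displaystyle \tau }$ by the lag-0 value gives an empirical autocorrelation coefficient. The function and data below are illustrative assumptions, using the biased 1/n estimator throughout so that the bound ${\displaystyle -1\leq \rho \leq 1}$ holds.

```python
import numpy as np

def autocorrelation(x, lag):
    """rho(lag) = C_XX(lag) / C_XX(0), estimated from a sample.

    Both numerator and denominator use the biased 1/n sample
    autocovariance, so rho(0) == 1 by construction.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n                      # C_XX(0) = sigma^2
    c_lag = np.dot(xc[:n - lag], xc[lag:]) / n   # C_XX(lag)
    return c_lag / c0

x = np.array([2.0, 4.0, 6.0, 8.0])
print(autocorrelation(x, 0))  # 1.0 by construction
print(autocorrelation(x, 1))  # 1.25 / 5.0 = 0.25
```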

Properties

The autocovariance of a linearly filtered process ${\displaystyle Y_{t}}$

${\displaystyle Y_{t}=\sum _{k=-\infty }^{\infty }a_{k}X_{t+k}\,}$

is

${\displaystyle C_{YY}(\tau )=\sum _{k,l=-\infty }^{\infty }a_{k}a_{l}C_{XX}(\tau +k-l).\,}$
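For a finite filter the double sum can be evaluated directly. The sketch below assumes a compactly supported filter ${\displaystyle a_{k}}$ and a known input autocovariance; the AR(1)-style choice ${\displaystyle C_{XX}(\tau )=0.5^{|\tau |}}$ is a hypothetical example, not from the source.

```python
def filtered_autocovariance(a, c_xx, tau):
    """C_YY(tau) = sum over k, l of a_k * a_l * C_XX(tau + k - l).

    `a` maps filter index k to coefficient a_k (finite support assumed);
    `c_xx` is a callable returning the input autocovariance at an
    integer lag. Direct evaluation of the formula above.
    """
    return sum(ak * al * c_xx(tau + k - l)
               for k, ak in a.items()
               for l, al in a.items())

# Example: moving average Y_t = (X_t + X_{t+1}) / 2 applied to an input
# with C_XX(tau) = 0.5 ** |tau| (hypothetical numbers for illustration).
a = {0: 0.5, 1: 0.5}
c_xx = lambda tau: 0.5 ** abs(tau)
print(filtered_autocovariance(a, c_xx, 0))  # 0.25 * (1 + 0.5 + 0.5 + 1) = 0.75
```

Note that the result inherits the symmetry of the input: ${\displaystyle C_{YY}(\tau )=C_{YY}(-\tau )}$, as the double sum makes explicit.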