# Correlogram

Figure: 100 random numbers with a "hidden" sine function, with an autocorrelation (correlogram) of the series shown below.

In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations ${\displaystyle r_{h}\,}$ versus ${\displaystyle h\,}$ (the time lags) is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.

The correlogram is a commonly used tool for checking randomness in a data set. If random, autocorrelations should be near zero for any and all time-lag separations. If non-random, then one or more of the autocorrelations will be significantly non-zero.

In addition, correlograms are used in the model identification stage for Box–Jenkins autoregressive moving average time series models. Autocorrelations should be near-zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.

In multivariate analysis, correlation matrices shown as color-mapped images may also be called "correlograms" or "corrgrams".[1][2][3]

## Applications

The correlogram can help provide answers to the following questions:[4]

• Are the data random?
• Is an observation related to an adjacent observation?
• Is an observation related to an observation twice-removed? (etc.)
• Is the observed time series white noise?
• Is the observed time series sinusoidal?
• Is the observed time series autoregressive?
• What is an appropriate model for the observed time series?
• Is the model ${\displaystyle Y={\text{constant}}+{\text{error}}}$ valid and sufficient?
• Is the formula ${\displaystyle s_{\bar {Y}}=s/{\sqrt {N}}}$ valid?

## Importance

Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:

• Most standard statistical tests depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.
• Many commonly used statistical formulae depend on the randomness assumption, the most common formula being the formula for determining the standard error of the sample mean:
${\displaystyle s_{\bar {Y}}=s/{\sqrt {N}}}$

where s is the standard deviation of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds.

• For univariate data, the default model is
${\displaystyle Y={\text{constant}}+{\text{error}}}$

If the data are not random, this model is incorrect and invalid, and the estimates for the parameters (such as the constant) become nonsensical and invalid.
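The breakdown of the ${\displaystyle s/{\sqrt {N}}}$ formula under non-randomness can be seen in a small simulation. The sketch below (all names and parameter values are illustrative choices, not from this article) generates positively autocorrelated AR(1) series and compares the actual spread of the sample mean with what the textbook formula reports:

```python
# Sketch: why s / sqrt(N) fails when the randomness assumption is violated.
# Pure standard library; phi, N, and the replication count are arbitrary.
import math
import random
import statistics

random.seed(0)

def ar1_series(n, phi, sigma=1.0):
    """Generate an AR(1) series Y_t = phi * Y_{t-1} + e_t
    (positively autocorrelated when phi > 0)."""
    y, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + random.gauss(0.0, sigma)
        y.append(prev)
    return y

N, reps, phi = 200, 2000, 0.9
means, formula_ses = [], []
for _ in range(reps):
    y = ar1_series(N, phi)
    means.append(statistics.fmean(y))
    # The textbook formula, valid only for random (uncorrelated) data:
    formula_ses.append(statistics.stdev(y) / math.sqrt(N))

empirical_se = statistics.stdev(means)          # actual spread of the sample mean
typical_formula_se = statistics.fmean(formula_ses)
# For phi = 0.9 the formula badly understates the true standard error.
print(empirical_se, typical_formula_se)
```

With strong positive autocorrelation the empirical standard error of the mean is several times larger than the value the formula reports, which is exactly the failure mode the randomness check guards against.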

## Estimation of autocorrelations

The autocorrelation coefficient at lag h is given by

${\displaystyle r_{h}=c_{h}/c_{0}\,}$

where ch is the autocovariance function

${\displaystyle c_{h}={\frac {1}{N}}\sum _{t=1}^{N-h}\left(Y_{t}-{\bar {Y}}\right)\left(Y_{t+h}-{\bar {Y}}\right)}$

and c0 is the variance function

${\displaystyle c_{0}={\frac {1}{N}}\sum _{t=1}^{N}\left(Y_{t}-{\bar {Y}}\right)^{2}}$

The resulting value of rh will range between −1 and +1.
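The estimator above is straightforward to implement. A minimal sketch (the function name is mine, not from the article) of the biased ${\displaystyle 1/N}$ form:

```python
# Sample autocorrelation r_h = c_h / c_0, using the 1/N autocovariance
# defined above.  Pure standard library.
from statistics import fmean

def sample_acf(y, h):
    """Return r_h = c_h / c_0 for lag h >= 0."""
    n = len(y)
    ybar = fmean(y)
    d = [v - ybar for v in y]
    c0 = sum(x * x for x in d) / n                          # variance term
    ch = sum(d[t] * d[t + h] for t in range(n - h)) / n     # autocovariance
    return ch / c0

y = [1.0, 2.0, 3.0, 4.0]
print([round(sample_acf(y, h), 4) for h in (1, 2, 3)])  # [0.25, -0.3, -0.45]
```

Working the short series by hand confirms the values: the centred data are (−1.5, −0.5, 0.5, 1.5), so ${\displaystyle c_{0}=1.25}$ and ${\displaystyle c_{1}=0.3125}$, giving ${\displaystyle r_{1}=0.25}$.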

### Alternate estimate

Some sources may use the following formula for the autocovariance function:

${\displaystyle c_{h}={\frac {1}{N-h}}\sum _{t=1}^{N-h}\left(Y_{t}-{\bar {Y}}\right)\left(Y_{t+h}-{\bar {Y}}\right)}$

Although this definition has less bias, the (1/N) formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.

In contrast to the definition above, this definition allows us to compute ${\displaystyle c_{h}}$ in a slightly more intuitive way. Consider the sample ${\displaystyle Y_{1},\dots ,Y_{N}}$, where ${\displaystyle Y_{i}\in \mathbb {R} ^{n}}$ for ${\displaystyle i=1,\dots ,N}$. Then, let

${\displaystyle X={\begin{bmatrix}Y_{1}-{\bar {Y}}&\cdots &Y_{N}-{\bar {Y}}\end{bmatrix}}\in \mathbb {R} ^{n\times N}}$

We then compute the Gram matrix ${\displaystyle Q=X^{\top }X}$. Finally, ${\displaystyle c_{h}}$ is computed as the sample mean of the ${\displaystyle h}$th diagonal of ${\displaystyle Q}$. For example, the ${\displaystyle 0}$th diagonal (the main diagonal) of ${\displaystyle Q}$ has ${\displaystyle N}$ elements, and its sample mean corresponds to ${\displaystyle c_{0}}$. The ${\displaystyle 1}$st diagonal (to the right of the main diagonal) of ${\displaystyle Q}$ has ${\displaystyle N-1}$ elements, and its sample mean corresponds to ${\displaystyle c_{1}}$, and so on.
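The Gram-matrix view can be checked numerically. The sketch below (helper names are mine) treats the scalar case ${\displaystyle n=1}$, where ${\displaystyle Q_{ij}=(Y_{i}-{\bar {Y}})(Y_{j}-{\bar {Y}})}$, and confirms that the mean of the ${\displaystyle h}$th diagonal of ${\displaystyle Q}$ equals the ${\displaystyle 1/(N-h)}$ estimator:

```python
# Equivalence of the Gram-matrix diagonal mean and the unbiased
# 1/(N - h) autocovariance, for a scalar series.  Pure standard library.
from statistics import fmean

def acov_gram(y, h):
    """c_h as the mean of the h-th diagonal of Q = X^T X, where the
    columns of X are the centred observations."""
    ybar = fmean(y)
    x = [v - ybar for v in y]           # X is 1 x N, so Q[i][j] = x[i] * x[j]
    diag = [x[t] * x[t + h] for t in range(len(y) - h)]
    return fmean(diag)                  # mean of the N - h diagonal entries

def acov_direct(y, h):
    """The 1/(N - h) formula, for comparison."""
    ybar = fmean(y)
    d = [v - ybar for v in y]
    return sum(d[t] * d[t + h] for t in range(len(y) - h)) / (len(y) - h)

y = [2.0, 4.0, 3.0, 7.0, 5.0]
print(all(abs(acov_gram(y, h) - acov_direct(y, h)) < 1e-12 for h in range(4)))
```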

## Statistical inference with correlograms

Figure: Correlogram from a 400-point sample of a first-order autoregressive process with 0.75 correlation between adjacent points, along with the 95% confidence intervals (plotted about the correlation estimates in black and about zero in red), as calculated by the equations in this section. The dashed blue line shows the actual autocorrelation function of the sampled process.
Figure: 20 correlograms from 400-point samples of the same random process as in the previous figure.

In the same graph one can draw upper and lower bounds for autocorrelation with significance level ${\displaystyle \alpha \,}$:

${\displaystyle B=\pm z_{1-\alpha /2}SE(r_{h})\,}$ with ${\displaystyle r_{h}\,}$ as the estimated autocorrelation at lag ${\displaystyle h\,}$.

If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond a given lag is rejected at a significance level of ${\displaystyle \alpha \,}$. This test is an approximate one and assumes that the time-series is Gaussian.

In the above, z1−α/2 is the (1 − α/2) quantile of the standard normal distribution; SE is the standard error, which can be computed by Bartlett's formula for MA(q) processes:

${\displaystyle SE(r_{1})={\frac {1}{\sqrt {N}}}}$
${\displaystyle SE(r_{h})={\sqrt {\frac {1+2\sum _{i=1}^{h-1}r_{i}^{2}}{N}}}}$ for ${\displaystyle h>1.\,}$

In the example plotted, we can reject the null hypothesis that there is no autocorrelation between time-points which are separated by lags up to 4. For most longer lags, one cannot reject the null hypothesis of no autocorrelation.

Note that there are two distinct formulas for generating the confidence bands:

1. If the correlogram is being used to test for randomness (i.e., there is no time dependence in the data), the following formula is recommended:

${\displaystyle \pm {\frac {z_{1-\alpha /2}}{\sqrt {N}}}}$

where N is the sample size, z is the quantile function of the standard normal distribution and α is the significance level. In this case, the confidence bands have fixed width that depends on the sample size.

2. Correlograms are also used in the model identification stage for fitting ARIMA models. In this case, a moving average model is assumed for the data and the following confidence bands should be generated:

${\displaystyle \pm z_{1-\alpha /2}{\sqrt {{\frac {1}{N}}\left(1+2\sum _{i=1}^{k}r_{i}^{2}\right)}}}$

where k is the lag. In this case, the confidence bands increase as the lag increases.
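Both band formulas are simple to compute; a sketch (function names are mine, and the ${\displaystyle r_{i}}$ values used below are illustrative, not from the figures) using the standard-normal quantile from Python's `statistics` module:

```python
# The two confidence-band formulas: fixed width for the randomness test,
# Bartlett-style widening bands for MA model identification.
import math
from statistics import NormalDist

def fixed_band(n, alpha=0.05):
    """Band for testing randomness: z_{1-alpha/2} / sqrt(N)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z / math.sqrt(n)

def ma_band(r, k, n, alpha=0.05):
    """Bartlett-style band at lag k; r holds the estimated
    autocorrelations r_1, r_2, ...  Widens as k grows."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z * math.sqrt((1 + 2 * sum(ri * ri for ri in r[:k])) / n)

n = 400
print(round(fixed_band(n), 4))        # about 0.098 for N = 400
r = [0.75, 0.56, 0.42]                # illustrative r_i values
print(round(ma_band(r, 3, n), 4))     # wider than the fixed band
```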

## Sum of the sample autocorrelation function

Hassani's −1/2 theorem[5] indicates that the sum of the sample ACF is always −1/2 for any stationary time series of arbitrary length,[6] a constraint that affects any diagnostic or analysis procedure based on the sample ACF, in both the time and frequency domains.

The sum of the sample ACF, ${\displaystyle S_{ACF}}$, over lags ${\displaystyle h\geq 1}$ is always ${\displaystyle {\frac {-1}{2}}}$ for any stationary time series with arbitrary length ${\displaystyle T\geq 2}$;[6][7] that is:

${\displaystyle S_{ACF}=\sum _{h=1}^{T-1}{\hat {\rho }}_{h}={\frac {-1}{2}}}$
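The identity is easy to verify numerically (code mine, not from the cited papers): with the biased ${\displaystyle 1/N}$ estimator and the sample mean, the sample autocorrelations over lags ${\displaystyle 1,\dots ,T-1}$ sum to exactly ${\displaystyle -1/2}$, whatever the input series:

```python
# Numerical check of the -1/2 result for the sum of the sample ACF.
import random
from statistics import fmean

def sum_sample_acf(y):
    """Sum of r_h over h = 1..T-1, using the biased 1/N autocovariance."""
    n = len(y)
    ybar = fmean(y)
    d = [v - ybar for v in y]
    c0 = sum(x * x for x in d) / n
    total = 0.0
    for h in range(1, n):
        total += sum(d[t] * d[t + h] for t in range(n - h)) / n / c0
    return total

random.seed(1)
y = [random.random() for _ in range(50)]   # arbitrary series
print(round(sum_sample_acf(y), 10))        # -0.5
```

The result follows from ${\displaystyle \left(\sum _{t}(Y_{t}-{\bar {Y}})\right)^{2}=0}$: expanding the square forces the cross-products, and hence the summed autocovariances, to equal ${\displaystyle -c_{0}/2}$.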

## Software

Correlograms are available in most general purpose statistical libraries.

Correlograms:

• Python (pandas): pandas.plotting.autocorrelation_plot
• R: functions acf and pacf

Corrgrams:

• R: package corrgram[2][3]

Sum of the sample autocorrelation function:

• R: package Hassani.SACF[9]

## References

1. ^ Friendly, Michael (19 August 2002). "Corrgrams: Exploratory displays for correlation matrices" (PDF). The American Statistician. Taylor & Francis. 56 (4): 316–324. doi:10.1198/000313002533. Retrieved 19 January 2014.
2. ^ a b "CRAN – Package corrgram". cran.r-project.org. 29 August 2013. Retrieved 19 January 2014.
3. ^ a b "Quick-R: Correlograms". statmethods.net. Retrieved 19 January 2014.
4. ^ "1.3.3.1. Autocorrelation Plot". www.itl.nist.gov. Retrieved 20 August 2018.
5. ^ Hassani, Hossein; Yeganegi, Mohammad Reza. "Sum of squared ACF and the Ljung–Box statistics". Physica A. 520: 81–86 – via Elsevier Science Direct.
6. ^ a b Hassani, Hossein. "Sum of the sample autocorrelation function". Random Operators and Stochastic Equations. 17 (2): 125–130.
7. ^ Hassani, Hossein (15 April 2010). "A note on the sum of the sample autocorrelation function". Physica A: Statistical Mechanics and its Applications. 389 (8): 1601–1606. doi:10.1016/j.physa.2009.12.050. ISSN 0378-4371.
9. ^ "Hassani.SACF". CRAN.