# Correspondence analysis

Correspondence analysis (CA), also known as reciprocal averaging, is a multivariate statistical technique first proposed by Herman Otto Hirschfeld (later known as H. O. Hartley) and subsequently developed by Jean-Paul Benzécri. It is conceptually similar to principal component analysis (PCA), but applies to categorical rather than continuous data. Like PCA, it provides a means of displaying or summarising a set of data in two-dimensional graphical form.

All data should be on the same scale for CA to be applicable, since the method treats rows and columns equivalently. CA is traditionally applied to contingency tables, where it decomposes the chi-squared statistic associated with the table into orthogonal factors. Because CA is a descriptive technique, it can be applied to tables whether or not the $\chi ^{2}$ statistic is appropriate.

## Details

Like principal components analysis, correspondence analysis creates orthogonal components and, for each item in a table, a set of scores (sometimes called factor scores, see Factor analysis). Correspondence analysis is performed on a contingency table, C, of size m×n where m is the number of rows and n is the number of columns.

### Preprocessing

From table C, compute a set of weights for the columns and the rows (sometimes called masses). The row weights are

$w_{m}={\frac {1}{n_{C}}}C\mathbf {1}$

and the column weights are

$w_{n}={\frac {1}{n_{C}}}\mathbf {1} ^{T}C,$

where $n_{C}=\sum _{i=1}^{m}\sum _{j=1}^{n}C_{ij}$ is the sum of all entries of C and $\mathbf {1}$ is a column vector of ones of the appropriate dimension.

Next, compute a table S, where C is divided by its sum:

$S={\frac {1}{n_{C}}}C.$

Finally, compute a table M from S and the weights:

$M=S-w_{m}w_{n}.$

### Interpretation of preprocessing

The vectors $w_{m}$ and $w_{n}$ give the marginal probabilities of the row and column classes, respectively, while $S$ gives the joint probability distribution of rows and columns. $M$ therefore gives the deviations from independence; squared and appropriately scaled, these deviations sum to the chi-squared statistic of $C$.
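As a concrete illustration, the preprocessing steps and their link to the chi-squared statistic can be sketched in Python with NumPy (the contingency table here is an arbitrary example, not taken from the text):

```python
import numpy as np

# An example contingency table C (rows: categories of one variable,
# columns: categories of the other).
C = np.array([[10.0, 20.0, 30.0],
              [20.0, 10.0, 40.0]])

n_C = C.sum()                  # grand total of the table
w_m = C.sum(axis=1) / n_C      # row weights (masses), a length-m vector
w_n = C.sum(axis=0) / n_C      # column weights (masses), a length-n vector

S = C / n_C                    # joint probability table
M = S - np.outer(w_m, w_n)     # deviations from independence

# Squaring and scaling the deviations recovers the chi-squared statistic:
chi2 = n_C * np.sum(M**2 / np.outer(w_m, w_n))
```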

### Orthogonal components

The table M is then decomposed with a generalized singular value decomposition in which the left and right singular vectors are constrained by weights. The weights are the diagonal matrices

$W_{m}=\operatorname {diag} \{1/w_{m}\}$

and

$W_{n}=\operatorname {diag} \{1/w_{n}\},$

whose diagonal elements are the reciprocals of the entries of $w_{m}$ and $w_{n}$, respectively, and whose off-diagonal elements are all 0.

M is then decomposed via the generalized singular value decomposition

$M=U\Sigma V^{*},$

where the singular vectors satisfy

$U^{*}W_{m}U=V^{*}W_{n}V=I.$

### Factor scores

Factor scores for the row items of table C are

$F_{m}=W_{m}U\Sigma$

and for the column items

$F_{n}=W_{n}V\Sigma .$

## Extensions and applications

Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis.

In the social sciences, correspondence analysis, and particularly its extension multiple correspondence analysis, was made known outside France through French sociologist Pierre Bourdieu's application of it.

## Implementations

• The data visualization system Orange includes the module orngCA.
• The statistical system R includes the packages MASS, ade4, ca, vegan, ExPosition, and FactoMineR, which perform correspondence analysis and multiple correspondence analysis.
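As a minimal self-contained sketch of the full computation described in the preceding sections, the weighted decomposition can be obtained from an ordinary SVD by rescaling M with the square roots of the masses, a standard equivalence. This sketch assumes NumPy; the example table is arbitrary:

```python
import numpy as np

def correspondence_analysis(C):
    """Return row factor scores, column factor scores, and singular values
    for a contingency table C, following the steps in the text."""
    C = np.asarray(C, dtype=float)
    n_C = C.sum()
    w_m = C.sum(axis=1) / n_C          # row masses
    w_n = C.sum(axis=0) / n_C          # column masses

    M = C / n_C - np.outer(w_m, w_n)   # deviations from independence

    # Generalized SVD with weights W_m = diag(1/w_m), W_n = diag(1/w_n):
    # an ordinary SVD of A = W_m^{1/2} M W_n^{1/2} yields the constrained
    # singular vectors after rescaling.
    A = M / np.sqrt(np.outer(w_m, w_n))
    P, sigma, Qt = np.linalg.svd(A, full_matrices=False)

    U = P * np.sqrt(w_m)[:, None]      # satisfies U^T W_m U = I
    V = Qt.T * np.sqrt(w_n)[:, None]   # satisfies V^T W_n V = I

    F_m = (U / w_m[:, None]) * sigma   # row factor scores  W_m U Sigma
    F_n = (V / w_n[:, None]) * sigma   # column factor scores  W_n V Sigma
    return F_m, F_n, sigma

F_m, F_n, sigma = correspondence_analysis([[10, 20, 30],
                                           [20, 10, 40]])
```

A useful check is that the mass-weighted sum of squared factor scores on each axis equals the squared singular value of that axis, so the total over all axes recovers the chi-squared statistic divided by $n_{C}$.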