In probability theory and statistics, covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the smaller values, i.e., the variables tend to show similar behavior, the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the smaller values of the other, i.e., the variables tend to show opposite behavior, the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which serves as an estimated value of the parameter.
Definition

The covariance between two jointly distributed real-valued random variables x and y with finite second moments is defined as

Cov(x, y) = E[(x − E[x]) (y − E[y])],

where E[x] is the expected value of x, also known as the mean of x. By using the linearity property of expectations, this can be simplified to

Cov(x, y) = E[xy] − E[x] E[y].
However, when E[xy] ≈ E[x] E[y], this last equation is prone to catastrophic cancellation when computed with floating-point arithmetic and thus should be avoided in computer programs when the data have not been centered beforehand.
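As a minimal sketch of this numerical issue (the function names and data are invented for illustration), the following Python snippet compares the shifted formula E[xy] − E[x] E[y] with the centered formula on data carrying a large common offset:

```python
import random

def cov_naive(xs, ys):
    """Cov(x, y) = E[xy] - E[x]E[y]: prone to catastrophic cancellation."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    mean_xy = sum(x * y for x, y in zip(xs, ys)) / n
    return mean_xy - mean_x * mean_y

def cov_centered(xs, ys):
    """Cov(x, y) = E[(x - E[x])(y - E[y])]: stable two-pass form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n

random.seed(0)
# Small fluctuations around a huge offset: E[xy] and E[x]E[y] are nearly
# equal, so their difference loses most significant digits.
offset = 1e9
xs = [offset + random.gauss(0, 1) for _ in range(10_000)]
ys = [offset + random.gauss(0, 1) for _ in range(10_000)]

print(cov_naive(xs, ys))     # dominated by rounding error
print(cov_centered(xs, ys))  # close to the true covariance (about 0 here)
```

The centered two-pass form costs one extra pass over the data but keeps full precision regardless of the offset.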
For random vectors x (of dimension m) and y (of dimension n), the m×n cross-covariance matrix (also known as the dispersion matrix or variance–covariance matrix, or simply called the covariance matrix) is equal to

Cov(x, y) = E[(x − E[x]) (y − E[y])^T],

where the superscript T denotes the transpose of a vector (or matrix).
The (i,j)-th element of this matrix is equal to the covariance Cov(xi, yj) between the i-th scalar component of x and the j-th scalar component of y. In particular, Cov(y, x) is the transpose of Cov(x, y).
For a vector x = (x1, ..., xm) of m jointly distributed random variables with finite second moments, its covariance matrix is defined as

Σ(x) = Cov(x, x) = E[(x − E[x]) (x − E[x])^T].
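As an illustrative sketch (the data and variable names are invented for the example), one can estimate a cross-covariance matrix from joint samples and observe the transpose relationship stated above:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100_000 joint samples of x (dimension 3) and y (dimension 2),
# made dependent by sharing a common latent vector z.
z = rng.normal(size=(100_000, 2))
x = np.hstack([z, z[:, :1] + rng.normal(size=(100_000, 1))])  # shape (N, 3)
y = z + rng.normal(size=(100_000, 2))                         # shape (N, 2)

# Cov(x, y) = E[(x - E[x])(y - E[y])^T], estimated by averaging over samples.
xc = x - x.mean(axis=0)
yc = y - y.mean(axis=0)
cov_xy = xc.T @ yc / len(x)   # shape (3, 2): the m-by-n cross-covariance
cov_yx = yc.T @ xc / len(x)   # shape (2, 3)

# Cov(y, x) is the transpose of Cov(x, y), up to floating-point error.
assert np.allclose(cov_yx, cov_xy.T)
print(cov_xy)
```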
Properties

Random variables whose covariance is zero are called uncorrelated.
The units of measurement of the covariance Cov(x, y) are those of x times those of y. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
- Variance is a special case of the covariance when the two variables are identical: Cov(x, x) = Var(x) = σ²(x).
- If x, y, w, and v are real-valued random variables and a, b, c, d are constants ("constant" in this context means non-random), then the following facts are a consequence of the definition of covariance:

Cov(x, a) = 0
Cov(x, x) = Var(x)
Cov(x, y) = Cov(y, x)
Cov(ax, by) = ab Cov(x, y)
Cov(x + a, y + b) = Cov(x, y)
Cov(ax + by, cw + dv) = ac Cov(x, w) + ad Cov(x, v) + bc Cov(y, w) + bd Cov(y, v)
For a sequence x1, ..., xn of random variables, and constants a1, ..., an, we have

Var(a1x1 + ... + anxn) = Σi ai² Var(xi) + 2 Σi<j ai aj Cov(xi, xj).
A more general identity for covariance matrices
Let x be a random vector with covariance matrix Σ(x), and let A be a matrix that can act on x. The covariance matrix of the vector Ax is:

Σ(Ax) = A Σ(x) A^T.
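A quick numerical sanity check of this identity, sketched in Python with arbitrary example matrices (none of these values come from the original text):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary valid covariance matrix: S = B B^T is symmetric positive
# semi-definite for any B.
B = rng.normal(size=(3, 3))
S = B @ B.T                      # covariance matrix of the random vector x
A = rng.normal(size=(2, 3))      # a linear map acting on x

# Draw samples of x with covariance S via x = B z, where z is standard normal.
z = rng.normal(size=(3, 500_000))
x = B @ z

print(np.cov(A @ x))             # empirical covariance of Ax
print(A @ S @ A.T)               # matches up to sampling error
```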
If x and y are independent, then their covariance is zero. This follows because under independence,

E[xy] = E[x] E[y],

and therefore Cov(x, y) = E[xy] − E[x] E[y] = 0.
The converse, however, is not generally true. For example, let x be uniformly distributed in [−1, 1] and let y = x². Clearly, x and y are dependent, but

Cov(x, y) = E[x · x²] − E[x] E[x²] = E[x³] − E[x] E[x²] = 0 − 0 = 0,

since the odd moments of x vanish by symmetry.
In this case, the relationship between y and x is non-linear, while correlation and covariance are measures of linear dependence between two variables. This example shows that if two variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.
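A small simulation sketch of this example (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2  # fully determined by x, hence dependent

# The sample covariance is near zero despite the exact functional dependence.
print(np.cov(x, y)[0, 1])   # approximately 0
```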
Relationship to inner products
Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:
- bilinear: for constants a and b and random variables x, y, z, σ(ax + by, z) = a σ(x, z) + b σ(y, z);
- symmetric: σ(x, y) = σ(y, x);
- positive semi-definite: σ²(x) = σ(x, x) ≥ 0 for all random variables x, and σ(x, x) = 0 implies that x is a constant random variable.
In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.
As a result, for random variables with finite variance, the inequality

|σ(x, y)| ≤ √(σ²(x) σ²(y))

holds via the Cauchy–Schwarz inequality.
Proof: If σ²(y) = 0, then it holds trivially. Otherwise, let the random variable

z = x − (σ(x, y) / σ²(y)) y.

Then we have

0 ≤ σ²(z) = σ(x − (σ(x, y) / σ²(y)) y, x − (σ(x, y) / σ²(y)) y) = σ²(x) − σ(x, y)² / σ²(y),

and rearranging gives σ(x, y)² ≤ σ²(x) σ²(y).
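A quick numerical check of the inequality on arbitrary simulated data (a sketch, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two correlated variables built from shared noise.
u = rng.normal(size=100_000)
x = u + rng.normal(size=100_000)
y = 2.0 * u + rng.normal(size=100_000)

cov_xy = np.cov(x, y)[0, 1]
# |Cov(x, y)| never exceeds the product of the standard deviations.
print(abs(cov_xy) <= np.sqrt(np.var(x, ddof=1) * np.var(y, ddof=1)))  # True
```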
Calculating the sample covariance
The sample covariance of N observations of K variables is the K-by-K matrix Q = [qjk] with the entries

qjk = (1 / (N − 1)) Σi=1..N (xij − x̄j)(xik − x̄k),

where xij is the i-th observation of the j-th variable and x̄j is the sample mean of that variable. The entry qjk is an estimate of the covariance between variable j and variable k.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector x, a row vector whose jth element (j = 1, ..., K) is one of the random variables. The reason the sample covariance matrix has N − 1 in the denominator rather than N is essentially that the population mean E[x] is not known and is replaced by the sample mean x̄. If the population mean E[x] is known, the analogous unbiased estimate is given by

qjk = (1 / N) Σi=1..N (xij − E[xj])(xik − E[xk]).
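The following sketch computes the sample covariance matrix directly and checks it against NumPy's built-in estimator (the data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))    # N = 500 observations of K = 3 variables

N = X.shape[0]
Xc = X - X.mean(axis=0)          # subtract the sample mean of each variable
Q = Xc.T @ Xc / (N - 1)          # K-by-K sample covariance, N - 1 denominator

# np.cov uses the same N - 1 convention; rowvar=False puts variables in columns.
assert np.allclose(Q, np.cov(X, rowvar=False))
print(Q)
```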
Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
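As a brief sketch, the normalization is simply a division by the two standard deviations (data invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)

# Pearson correlation: covariance normalized by both standard deviations.
r = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
assert np.allclose(r, np.corrcoef(x, y)[0, 1])
print(r)   # dimensionless, always in [-1, 1]
```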
Applications

In genetics and molecular biology
Covariance is an important measure in biology. Certain sequences of DNA are more conserved than others among species, and thus to study the secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found, in noncoding RNA (such as microRNA), we can determine which sequences are necessary for common structural motifs, such as an RNA loop.
In financial economics
Covariances play a key role in financial economics, especially in portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.
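As an illustrative sketch of the portfolio-theory use (the weights and covariance values below are hypothetical, not from the original text), the variance of a portfolio's return is the quadratic form w^T Σ w in the covariance matrix of the assets' returns:

```python
import numpy as np

# Hypothetical covariance matrix of three assets' returns (illustrative).
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.025, 0.004],
                  [0.002, 0.004, 0.010]])

w = np.array([0.5, 0.3, 0.2])    # portfolio weights summing to 1

# Var(w . r) = w^T Sigma w: the off-diagonal covariances are what
# diversification exploits to reduce total risk.
portfolio_variance = w @ Sigma @ w
print(portfolio_variance, np.sqrt(portfolio_variance))
```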
External links

- Hazewinkel, Michiel, ed. (2001), "Covariance", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- MathWorld page on calculating the sample covariance
- Covariance Tutorial using R