Partition of sums of squares

Sum of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is "the sum of the squared deviations". Mathematically, it is an unscaled, or unadjusted, measure of dispersion (also called variability). When scaled for the number of degrees of freedom, it estimates the variance, or spread, of the observations about their mean value.

The distance from any point in a collection of data to the mean of the data is the deviation. This can be written as $y_i - \overline{y}$, where $y_i$ is the ith data point and $\overline{y}$ is the estimate of the mean. If all such deviations are squared, then summed, as in $\sum_{i=1}^n (y_i - \overline{y})^2$, we have the "sum of squares" for these data.
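For example, for the four data points 1, 2, 3 and 6 the mean is 3, the deviations are $-2$, $-1$, $0$ and $3$, and the sum of squares is $(-2)^2 + (-1)^2 + 0^2 + 3^2 = 14$.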

When more data are added to the collection, the sum of squares will increase, except in unlikely cases such as the new data being exactly equal to the mean. So the sum of squares usually grows with the size of the data collection. That is a manifestation of the fact that it is unscaled.

In many cases, the number of degrees of freedom is simply the number of data points in the collection minus one. We write this as n − 1, where n is the number of data points.

Scaling (also known as normalizing) means adjusting the sum of squares so that it does not grow as the size of the data collection grows. This is important when we want to compare samples of different sizes, such as a sample of 100 people compared to a sample of 20 people. If the sum of squares were not normalized, its value would always be larger for the sample of 100 people than for the sample of 20 people. To scale the sum of squares, we divide it by the degrees of freedom, i.e., calculate the sum of squares per degree of freedom, or variance. The standard deviation, in turn, is the square root of the variance.
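As an illustration, the following minimal Python sketch (not part of the original article; the data values are arbitrary) computes the sum of squares and then scales it by the degrees of freedom to obtain the variance and standard deviation:

```python
# Sum of squares, variance, and standard deviation for a small sample.
# The data values are arbitrary, chosen only for illustration.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

n = len(data)
mean = sum(data) / n

# Unscaled measure of dispersion: the sum of squared deviations.
sum_of_squares = sum((x - mean) ** 2 for x in data)

# Scaling by the degrees of freedom (n - 1) gives the sample variance;
# its square root is the sample standard deviation.
degrees_of_freedom = n - 1
variance = sum_of_squares / degrees_of_freedom
standard_deviation = variance ** 0.5

print(sum_of_squares, variance, standard_deviation)
```

For these eight values the mean is 5, the sum of squares is 32, and dividing by the 7 degrees of freedom gives a variance of about 4.57.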

The above describes how the sum of squares is used in descriptive statistics; see the article on total sum of squares for an application of this broad principle to inferential statistics.

Partitioning the sum of squares in linear regression

Theorem. Given a linear regression model including a constant term, fitted by least squares to a sample containing n observations, the total sum of squares can be partitioned as follows into the explained sum of squares and the residual sum of squares:

$$\sum_{i=1}^n (y_i - \overline{y})^2 = \sum_{i=1}^n (\hat{y}_i - \overline{y})^2 + \sum_{i=1}^n (y_i - \hat{y}_i)^2,$$

where $y_i$ is the ith observed value, $\hat{y}_i$ is the corresponding fitted value, and $\overline{y}$ is the sample mean of the observations.

Proof.

Write each total deviation as the sum of an explained deviation and a residual, $y_i - \overline{y} = (\hat{y}_i - \overline{y}) + (y_i - \hat{y}_i)$, and expand the square:

$$\sum_{i=1}^n (y_i - \overline{y})^2 = \sum_{i=1}^n (\hat{y}_i - \overline{y})^2 + \sum_{i=1}^n (y_i - \hat{y}_i)^2 + 2\sum_{i=1}^n (y_i - \hat{y}_i)(\hat{y}_i - \overline{y}).$$

It remains to show that the cross term vanishes. The least-squares normal equations imply that the residuals are orthogonal to the fitted values, so $\sum_{i=1}^n (y_i - \hat{y}_i)\,\hat{y}_i = 0$. The requirement that the model includes a constant, or equivalently that the design matrix contains a column of ones, ensures that $\sum_{i=1}^n (y_i - \hat{y}_i) = 0$. Hence

$$\sum_{i=1}^n (y_i - \hat{y}_i)(\hat{y}_i - \overline{y}) = \sum_{i=1}^n (y_i - \hat{y}_i)\,\hat{y}_i - \overline{y}\sum_{i=1}^n (y_i - \hat{y}_i) = 0,$$

and the partition follows.
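As a numerical check of the partition, the following minimal Python sketch (not part of the article; the data are made up for illustration) fits a least-squares line with NumPy using a design matrix that contains a column of ones, and verifies that the total sum of squares equals the explained plus residual sums of squares up to floating-point error:

```python
import numpy as np

# Arbitrary illustrative data (not from the article).
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(size=n)

# Design matrix with a column of ones, so the model includes a constant.
X = np.column_stack([np.ones(n), x])

# Ordinary least squares fit and fitted values.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
y_bar = y.mean()

total_ss = np.sum((y - y_bar) ** 2)
explained_ss = np.sum((y_hat - y_bar) ** 2)
residual_ss = np.sum((y - y_hat) ** 2)

# The partition holds up to floating-point rounding error.
print(np.isclose(total_ss, explained_ss + residual_ss))  # True
```

If the column of ones is dropped from the design matrix, the residuals no longer sum to zero and the equality generally fails, which mirrors the role of the constant term in the proof above.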

See also