
Linear least squares (mathematics)


Linear least squares, also known as linear regression, is a form of least squares analysis used to find an approximate solution to an overdetermined system of linear equations. The idea behind linear least squares is that, in order to obtain good estimates of the parameters of a given linear model, one should use more observations than there are parameters to be determined.

Using linear least squares to fit a line through a large number of observations usually gives better results than picking just two points through which to draw the line.

Problem statement and solution

The objective consists of finding the minimum value of a sum of m squared residuals, r_i, with respect to a set of n parameters of a model,

S = \sum_{i=1}^{m} r_i^2.

Each residual

r_i = y_i - f(x_i, \boldsymbol\beta)

is defined as the difference between the value of a dependent variable, y_i, and the value of the model function, f(x_i, \boldsymbol\beta), where x_i is an independent variable and \boldsymbol\beta is a vector of n parameters. The values of the dependent variable are obtained from experimental measurements.

In linear least squares it is assumed that the model function depends linearly on the parameters \beta_j,

f(x_i, \boldsymbol\beta) = \sum_{j=1}^{n} X_{ij} \beta_j,

with some constants X_{ij}. Note that the values of the constants depend on f and on the values x_i of the independent variable x. Linearity here means only that the model function is linearly dependent on the parameters \beta_j; f may well be a nonlinear function of x.

The minimum value of S occurs when the gradient is zero. Since the model contains n parameters there are n gradient equations,

\frac{\partial S}{\partial \beta_j} = 2 \sum_{i=1}^{m} r_i \frac{\partial r_i}{\partial \beta_j} = 0, \qquad j = 1, \ldots, n.

We have

r_i = y_i - \sum_{k=1}^{n} X_{ik} \beta_k

and

\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.

Substitution of these expressions into the gradient equations gives

\frac{\partial S}{\partial \beta_j} = -2 \sum_{i=1}^{m} X_{ij} \left( y_i - \sum_{k=1}^{n} X_{ik} \beta_k \right) = 0, \qquad j = 1, \ldots, n.

Setting each gradient equation equal to zero and rearranging, the n simultaneous linear equations known as the normal equations are obtained:[1]

\sum_{i=1}^{m} \sum_{k=1}^{n} X_{ij} X_{ik} \hat\beta_k = \sum_{i=1}^{m} X_{ij} y_i, \qquad j = 1, \ldots, n.

The normal equations are written in matrix notation as

\left( \mathbf{X}^\mathsf{T} \mathbf{X} \right) \hat{\boldsymbol\beta} = \mathbf{X}^\mathsf{T} \mathbf{y}.

Solution of the normal equations yields the least squares estimators, \hat{\boldsymbol\beta}, of the parameter values. See example of linear regression for a worked-out numerical example with three parameters.
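As an illustration, here is a minimal NumPy sketch that forms and solves the normal equations directly; the data and the two-parameter straight-line design matrix are made up for the example.

    import numpy as np

    # Hypothetical data: m = 5 observations, n = 2 parameters (intercept and slope).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

    # Design matrix whose entries are the constants X_ij; here the columns are [1, x].
    X = np.column_stack([np.ones_like(x), x])

    # Form and solve the normal equations (X^T X) beta = X^T y.
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

    # np.linalg.lstsq solves the same problem via an orthogonal factorization,
    # which is usually preferred numerically (see the Computation section below).
    beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(beta_hat, beta_lstsq)   # the two results agree to rounding error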

When the observations are not equally reliable, a weighted sum of squares,

S = \sum_{i=1}^{m} W_{ii} r_i^2,

may be minimized.

Each element of the diagonal weight matrix, W, should, ideally, be equal to the reciprocal of the variance of the corresponding measurement.[2] The normal equations are then

\left( \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{X} \right) \hat{\boldsymbol\beta} = \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{y}.
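A sketch of the weighted case, assuming the per-measurement standard deviations are known; the numbers are illustrative only.

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.9, 2.1, 2.9, 4.2])
    sigma = np.array([0.1, 0.2, 0.1, 0.3])   # assumed standard deviation of each measurement

    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(1.0 / sigma**2)              # diagonal weights: reciprocal of each variance

    # Weighted normal equations: (X^T W X) beta = X^T W y.
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print(beta_hat)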

Straight line fitting

For straight line fitting there are only two normal equations, so a complete algebraic solution can be worked out with relative ease. For the model

f(x, \boldsymbol\beta) = \beta_1 + \beta_2 x

with unit weights, the normal equations are therefore

m \beta_1 + \beta_2 \sum_i x_i = \sum_i y_i
\beta_1 \sum_i x_i + \beta_2 \sum_i x_i^2 = \sum_i x_i y_i.

All the summations go from i = 1 to i = m. Each summation can be represented by a single symbol:

S_x = \sum x_i, \quad S_y = \sum y_i, \quad S_{xx} = \sum x_i^2, \quad S_{xy} = \sum x_i y_i, \quad S_{yy} = \sum y_i^2.

In terms of these symbols the normal equations become

m \beta_1 + S_x \beta_2 = S_y
S_x \beta_1 + S_{xx} \beta_2 = S_{xy}

and the solution, by Cramer's rule, is

\hat\beta_1 = \frac{S_{xx} S_y - S_x S_{xy}}{m S_{xx} - S_x^2}, \qquad \hat\beta_2 = \frac{m S_{xy} - S_x S_y}{m S_{xx} - S_x^2}.

These expressions have been used in hand calculators because, each time a data point is added or removed, the five sums are adjusted and the parameters recalculated, only seven operations in all.[3] The standard deviations of the parameter estimates (often called their standard errors) are

\sigma_{\hat\beta_1} = \sigma_y \sqrt{\frac{S_{xx}}{m S_{xx} - S_x^2}}, \qquad \sigma_{\hat\beta_2} = \sigma_y \sqrt{\frac{m}{m S_{xx} - S_x^2}},

where \sigma_y^2, the variance of an observation, is estimated as S/(m-2), S being the minimum value of the sum of squared residuals.

The correlation coefficient between the parameter estimates is

\rho_{12} = \frac{-S_x}{\sqrt{m S_{xx}}}.
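The closed-form expressions above translate directly into code. The following sketch (the function name and data are illustrative) accumulates the five sums and applies Cramer's rule; adding or removing a point only requires updating the sums.

    import numpy as np

    def fit_line(x, y):
        """Straight-line fit y = b1 + b2*x from the sums Sx, Sy, Sxx, Sxy."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        m = len(x)
        Sx, Sy = x.sum(), y.sum()
        Sxx, Sxy = (x * x).sum(), (x * y).sum()
        D = m * Sxx - Sx**2                 # common denominator from Cramer's rule
        b1 = (Sxx * Sy - Sx * Sxy) / D      # intercept
        b2 = (m * Sxy - Sx * Sy) / D        # slope
        return b1, b2

    print(fit_line([0, 1, 2, 3], [1.0, 2.1, 2.9, 4.2]))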

Example

A straight line is fitted to a set of observed data points y = 2, 3, 3, 4 obtained with the independent variable x at the values -1, 0, 2, 4.

[Figure: plot of the data points and the fitted straight line.]

Here m = 4, S_x = 5, S_y = 12, S_{xx} = 21 and S_{xy} = 20, so

\hat\beta_1 = \frac{21 \cdot 12 - 5 \cdot 20}{4 \cdot 21 - 5^2} = \frac{152}{59} \approx 2.58, \qquad \hat\beta_2 = \frac{4 \cdot 20 - 5 \cdot 12}{4 \cdot 21 - 5^2} = \frac{20}{59} \approx 0.34.

Now the residuals,

r_i = y_i - \hat\beta_1 - \hat\beta_2 x_i,

are calculated, and S = 0.305. After calculating the standard deviations (\sigma_{\hat\beta_1} \approx 0.2, \sigma_{\hat\beta_2} \approx 0.1) the final result is obtained:

y \approx 2.6(\pm 0.2) + 0.3(\pm 0.1)\,x.

Note that the error is only quoted to one significant digit.
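The example can be reproduced with a few lines of NumPy; the printed sum of squared residuals matches the value S = 0.305 quoted above.

    import numpy as np

    x = np.array([-1.0, 0.0, 2.0, 4.0])
    y = np.array([2.0, 3.0, 3.0, 4.0])
    m = len(x)

    Sx, Sy = x.sum(), y.sum()                 # 5, 12
    Sxx, Sxy = (x * x).sum(), (x * y).sum()   # 21, 20
    D = m * Sxx - Sx**2                       # 59

    b1 = (Sxx * Sy - Sx * Sxy) / D            # intercept, 152/59 ~ 2.58
    b2 = (m * Sxy - Sx * Sy) / D              # slope, 20/59 ~ 0.34

    r = y - (b1 + b2 * x)                     # residuals
    S = (r * r).sum()                         # ~ 0.305
    print(round(b1, 2), round(b2, 2), round(S, 3))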

Computation

Although the algebraic solution of the normal equations can be written as

\hat{\boldsymbol\beta} = \left( \mathbf{X}^\mathsf{T} \mathbf{X} \right)^{-1} \mathbf{X}^\mathsf{T} \mathbf{y},

it is not good practice to invert the normal equations matrix. An exception occurs in numerical smoothing and differentiation, where an analytical expression is required.

If the matrix \mathbf{X}^\mathsf{T}\mathbf{X} is well-conditioned and positive definite, that is, it has full rank, the normal equations can be solved directly by using the Cholesky decomposition \mathbf{X}^\mathsf{T}\mathbf{X} = \mathbf{R}^\mathsf{T}\mathbf{R}, where R is an upper triangular matrix, giving

\mathbf{R}^\mathsf{T} \mathbf{R} \hat{\boldsymbol\beta} = \mathbf{X}^\mathsf{T} \mathbf{y}.

The solution is obtained in two stages, a forward substitution, \mathbf{R}^\mathsf{T}\mathbf{z} = \mathbf{X}^\mathsf{T}\mathbf{y}, followed by a backward substitution, \mathbf{R}\hat{\boldsymbol\beta} = \mathbf{z}. Both substitutions are facilitated by the triangular nature of R.
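A sketch of the two-stage Cholesky solve described above, using NumPy and SciPy; the design matrix is illustrative and assumed to have full column rank.

    import numpy as np
    from scipy.linalg import solve_triangular

    # Illustrative full-rank problem.
    x = np.linspace(0.0, 1.0, 20)
    X = np.column_stack([np.ones_like(x), x])
    y = 1.0 + 2.0 * x

    A = X.T @ X                               # normal equations matrix
    b = X.T @ y
    L = np.linalg.cholesky(A)                 # A = L L^T, i.e. R = L^T is upper triangular
    z = solve_triangular(L, b, lower=True)    # forward substitution:  R^T z = X^T y
    beta_hat = solve_triangular(L.T, z)       # backward substitution: R beta = z
    print(beta_hat)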

A slower but more numerically stable method, which still works if X is not of full rank, can be obtained by computing the QR decomposition

\mathbf{X} = \mathbf{Q} \mathbf{R},

where Q is an orthogonal matrix and R is an upper triangular matrix. One can then solve

\mathbf{R} \hat{\boldsymbol\beta} = \mathbf{Q}^\mathsf{T} \mathbf{y}.
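A corresponding QR-based sketch with NumPy; the reduced QR factorization is used and the design matrix is again illustrative.

    import numpy as np

    x = np.linspace(0.0, 1.0, 20)
    X = np.column_stack([np.ones_like(x), x])
    y = 1.0 + 2.0 * x

    Q, R = np.linalg.qr(X)                  # reduced QR: X = Q R, with R upper triangular
    beta_hat = np.linalg.solve(R, Q.T @ y)  # solve R beta = Q^T y (back substitution suffices)
    print(beta_hat)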

A third alternative is to use the singular value decomposition (SVD),[4]

\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^\mathsf{T}.

This is effectively another kind of orthogonal decomposition, as both U and V are orthogonal. This method is the most computationally intensive, but is particularly useful if the normal equations matrix is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, giving a more stable and exact answer, by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
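A sketch of the truncated-SVD approach: singular values below a chosen relative threshold are simply dropped before forming the solution. The function name and tolerance are illustrative.

    import numpy as np

    def lstsq_truncated_svd(X, y, rel_tol=1e-10):
        """Least-squares solution via the SVD, ignoring singular values
        smaller than rel_tol times the largest one."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        keep = s > rel_tol * s[0]             # singular values are sorted in decreasing order
        # beta = V diag(1/s) U^T y, restricted to the retained singular values
        return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

    # Equivalent in spirit to np.linalg.pinv(X, rcond=rel_tol) @ y.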

Properties of the least-squares estimator

If the experimental errors, \epsilon_i, are uncorrelated, have a mean of zero and a constant variance, \sigma^2, the Gauss-Markov theorem states that the least-squares estimator, \hat{\boldsymbol\beta}, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a Normal distribution.

For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss-Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
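To make the claim concrete, the one-line derivation, written out here as a small LaTeX fragment, is:

    % Minimize S(\mu) = \sum_{i=1}^{m} (y_i - \mu)^2 with respect to \mu:
    \frac{dS}{d\mu} = -2 \sum_{i=1}^{m} (y_i - \mu) = 0
    \quad\Longrightarrow\quad
    \hat{\mu} = \frac{1}{m} \sum_{i=1}^{m} y_i
    % i.e. the least-squares estimate of a single constant is the arithmetic mean.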

However, in the case that the experimental errors do belong to a Normal distribution, the least-squares estimator is also a maximum likelihood estimator.[5]

These two properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.

Limitations

An assumption underlying the treatment given above is that the independent variable, x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares, also known as errors-in-variables modelling or rigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[6][7]

In some cases the (weighted) normal equations matrix is ill-conditioned; this occurs when the measurements have only a marginal effect on one or more of the estimated parameters.[8] In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate. Various regularization techniques can be applied in such cases, the most common of which is called Tikhonov regularization. If further information about the parameters is known, for example, a range of their possible values, then minimax techniques can also be used to increase the stability of the solution.
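As an illustration of the most common regularization, here is a minimal Tikhonov (ridge) sketch; the regularization parameter lam is a made-up value that would normally be chosen by cross-validation or from prior knowledge.

    import numpy as np

    def tikhonov_solve(X, y, lam):
        """Tikhonov-regularized least squares: minimizes
        ||y - X b||^2 + lam * ||b||^2 by solving (X^T X + lam I) b = X^T y."""
        n = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

    # Example: an ill-conditioned Vandermonde-type design matrix.
    x = np.linspace(0.0, 1.0, 30)
    X = np.vander(x, 8, increasing=True)      # polynomial basis, nearly collinear columns
    y = np.sin(2.0 * np.pi * x)
    print(tikhonov_solve(X, y, lam=1e-6))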

Another drawback of the least squares estimator is the fact that the norm of the residuals, \|\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}\|, is minimized, whereas in some cases one is truly interested in obtaining a small error in the parameter vector, e.g., a small value of \|\boldsymbol\beta - \hat{\boldsymbol\beta}\|. However, since the true parameter \boldsymbol\beta is unknown, this quantity cannot be directly minimized. If a prior probability on \boldsymbol\beta is known, then a Bayes estimator can be used to minimize the mean squared error, E\left\{ \|\boldsymbol\beta - \hat{\boldsymbol\beta}\|^2 \right\}. The least squares method is often applied when no prior is known. Surprisingly, however, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James-Stein estimator.

Parameter errors, correlation and confidence limits

The parameter values are linear combinations of the observed values,

\hat{\boldsymbol\beta} = \left( \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{X} \right)^{-1} \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{y}.

Therefore an expression for the errors on the parameters can be obtained by error propagation from the errors on the observations. Let the variance-covariance matrix for the observations be denoted by \mathbf{M} and that of the parameters by \mathbf{M}^{\boldsymbol\beta}. Then

\mathbf{M}^{\boldsymbol\beta} = \left( \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{X} \right)^{-1} \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{M} \mathbf{W} \mathbf{X} \left( \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{X} \right)^{-1}.

When \mathbf{W} = \mathbf{M}^{-1}, this simplifies to

\mathbf{M}^{\boldsymbol\beta} = \left( \mathbf{X}^\mathsf{T} \mathbf{W} \mathbf{X} \right)^{-1}.

When unit weights are used (\mathbf{W} = \mathbf{I}) it is implied that the experimental errors are uncorrelated and all equal: \mathbf{M} = \sigma^2 \mathbf{I}, where \sigma^2 is known as the variance of an observation of unit weight, and \mathbf{I} is an identity matrix. In this case \sigma^2 is approximated by \frac{S}{m-n}, where S is the minimum value of the objective function.

In all cases, the variance of the parameter \beta_i is given by M^{\boldsymbol\beta}_{ii} and the covariance between parameters \beta_i and \beta_j is given by M^{\boldsymbol\beta}_{ij}. Standard deviation is the square root of variance, and the correlation coefficient is given by \rho_{ij} = M^{\boldsymbol\beta}_{ij} / (\sigma_i \sigma_j). These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are always correlated.
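For the unit-weight case the formulas above combine to give M^beta ~ (S / (m - n)) (X^T X)^{-1}. The following sketch evaluates this for the straight-line example given earlier.

    import numpy as np

    x = np.array([-1.0, 0.0, 2.0, 4.0])
    y = np.array([2.0, 3.0, 3.0, 4.0])
    X = np.column_stack([np.ones_like(x), x])
    m, n = X.shape

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ beta_hat
    S = r @ r

    Mbeta = (S / (m - n)) * np.linalg.inv(X.T @ X)   # parameter variance-covariance matrix
    std_err = np.sqrt(np.diag(Mbeta))                # standard deviations of the parameters
    corr = Mbeta / np.outer(std_err, std_err)        # parameter correlation matrix
    print(std_err, corr[0, 1])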

It is often assumed, for want of any concrete evidence, that the error on a parameter belongs to a Normal distribution with a mean of zero and standard deviation \sigma. Under that assumption the following confidence limits can be derived:

68% confidence limits, \hat\beta \pm \sigma
95% confidence limits, \hat\beta \pm 1.96\sigma
99% confidence limits, \hat\beta \pm 2.58\sigma

The assumption is not unreasonable when m >> n. If the experimental errors are normally distributed the parameters will belong to a Student's t-distribution with m - n degrees of freedom. When m >> n, Student's t-distribution approximates a Normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error.[9]

When the number of observations is relatively small, Chebyshev's inequality can be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2 or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.

Residual values and correlation

The residuals are related to the observations by

\hat{\mathbf{r}} = \mathbf{y} - \mathbf{X} \hat{\boldsymbol\beta} = \mathbf{y} - \mathbf{X} \left( \mathbf{X}^\mathsf{T} \mathbf{X} \right)^{-1} \mathbf{X}^\mathsf{T} \mathbf{y}.

The symmetric, idempotent matrix \mathbf{X} \left( \mathbf{X}^\mathsf{T} \mathbf{X} \right)^{-1} \mathbf{X}^\mathsf{T} is known in the statistics literature as the hat matrix, \mathbf{H}. Thus,

\hat{\mathbf{r}} = \left( \mathbf{I} - \mathbf{H} \right) \mathbf{y},

where \mathbf{I} is an identity matrix. The variance-covariance matrix of the residuals, \mathbf{M}^{\mathbf{r}}, is given by

\mathbf{M}^{\mathbf{r}} = \left( \mathbf{I} - \mathbf{H} \right) \mathbf{M} \left( \mathbf{I} - \mathbf{H} \right).

This shows that even though the observations may be uncorrelated, the residuals are always correlated.

If experimental error follows a normal distribution, then, because of the linear relationship between residuals and observations, so should the residuals,[10] but since the observations are only a sample of the population of all possible observations, the residuals should belong to a Student's t-distribution. Studentized residuals are useful in making a statistical test for an outlier when a particular residual appears to be excessively large.
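Here is a sketch computing the hat matrix, the residual variance-covariance matrix for unit weights, and internally studentized residuals for the earlier example; the studentization formula r_i / (sigma * sqrt(1 - H_ii)) is the standard one and is stated here as an assumption rather than taken from the text.

    import numpy as np

    x = np.array([-1.0, 0.0, 2.0, 4.0])
    y = np.array([2.0, 3.0, 3.0, 4.0])
    X = np.column_stack([np.ones_like(x), x])
    m, n = X.shape

    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix H = X (X^T X)^{-1} X^T
    r = (np.eye(m) - H) @ y                        # residuals r = (I - H) y
    sigma2 = (r @ r) / (m - n)                     # estimated variance of an observation
    Mr = sigma2 * (np.eye(m) - H)                  # M^r = (I - H) M (I - H) with M = sigma^2 I
    t = r / np.sqrt(sigma2 * (1.0 - np.diag(H)))   # studentized residuals
    print(np.round(t, 2))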

Objective function

The objective function can be written as

S = \mathbf{y}^\mathsf{T} \left( \mathbf{I} - \mathbf{H} \right) \mathbf{y},

since \left( \mathbf{I} - \mathbf{H} \right) is also symmetric and idempotent. It can be shown from this[11] that the expected value of S is m - n. Note, however, that this is true only if the weights have been assigned correctly. If unit weights are assumed, the expected value of S is (m - n)\sigma^2, where \sigma^2 is the variance of an observation.

If it is assumed that the residuals belong to a Normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-square (\chi^2) distribution with m - n degrees of freedom. Some illustrative percentile values of \chi^2 are given in the following table.[12]

m - n    50%      95%      99%
10       9.34     18.3     23.2
25       24.3     37.7     44.3
100      99.3     124      136

These values can be used as a statistical criterion for goodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.
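Assuming the columns of the table above are the 50th, 95th and 99th percentiles, they can be reproduced with SciPy:

    from scipy.stats import chi2

    for df in (10, 25, 100):
        # 50th, 95th and 99th percentiles of chi-square with df degrees of freedom
        print(df, round(chi2.ppf(0.50, df), 2),
                  round(chi2.ppf(0.95, df), 2),
                  round(chi2.ppf(0.99, df), 2))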

Applications

References

  1. ^ The normal equations can also be written as \mathbf{X}^\mathsf{T} \left( \mathbf{y} - \mathbf{X} \hat{\boldsymbol\beta} \right) = \mathbf{0}. A geometrical interpretation of these equations is that the residual vector, \mathbf{y} - \mathbf{X} \hat{\boldsymbol\beta}, is orthogonal to the column space of the matrix X.
  2. ^ This implies that the observations are uncorrelated. If the observations are correlated, the expression S = \sum_i \sum_k r_i W_{ik} r_k applies. In this case the weight matrix should ideally be equal to the inverse of the variance-covariance matrix of the observations.
  3. ^ Since S_{xy} - S_x S_y / m = \sum (x_i - \bar{x})(y_i - \bar{y}) and S_{xx} - S_x^2 / m = \sum (x_i - \bar{x})^2, an alternative expression for the slope is \hat\beta_2 = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}. Also, from the first normal equation, \hat\beta_1 = \bar{y} - \hat\beta_2 \bar{x}.
  4. ^ C.L. Lawson and R.J. Hanson, Solving Least Squares Problems, Prentice-Hall, 1974.
  5. ^ H. Margenau and G.M. Murphy, The Mathematics of Physics and Chemistry, Van Nostrand, 1943, 1956.
  6. ^ P. Gans, Data Fitting in the Chemical Sciences, Wiley, 1992.
  7. ^ W.E. Deming, Statistical Adjustment of Data, Wiley, 1943.
  8. ^ When fitting polynomials the normal equations matrix is a Vandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.
  9. ^ J. Mandel, The Statistical Analysis of Experimental Data, Interscience, 1964.
  10. ^ K.V. Mardia, J.T. Kent and J.M. Bibby, Multivariate Analysis, Academic Press, 1979.
  11. ^ W.C. Hamilton, Statistics in Physical Science, The Ronald Press, New York, 1964.
  12. ^ M.R. Spiegel, Probability and Statistics, Schaum's Outline Series, McGraw-Hill, 1982.
  13. ^ F.S. Acton, Analysis of Straight-Line Data, Wiley, 1959.
  14. ^ P.G. Guest, Numerical Methods of Curve Fitting, Cambridge University Press, 1961.