Least squares

Least squares is a mathematical optimization technique that finds a "best fit" to a set of data by minimizing the sum of the squares of the differences (called residuals) between the fitted function and the data.

The least squares technique is commonly used in curve fitting. Many other optimization problems can also be expressed in a least squares form, by either minimizing energy or maximizing entropy.

Formulation of the problem

Suppose that the data set consists of the points (x_i, y_i) with i = 1, 2, ..., n. We want to find a function f such that

    f(x_i) ≈ y_i   for i = 1, 2, ..., n.

To attain this goal, we suppose that the function f is of a particular form containing some parameters which need to be determined. For instance, suppose that it is quadratic, meaning that f(x) = ax^2 + bx + c, where a, b and c are not yet known. We now seek the values of a, b and c that minimize the sum of the squares of the residuals:

    S = (y_1 − f(x_1))^2 + (y_2 − f(x_2))^2 + ... + (y_n − f(x_n))^2.

This explains the name least squares.
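
As a concrete illustration of the quantity being minimized, the following short Python sketch computes the sum of squared residuals S for the quadratic model above; the data points and parameter values are made up for the example.

def residual_sum_of_squares(xs, ys, a, b, c):
    """Return S = sum of (y_i - f(x_i))^2 for the quadratic f(x) = a*x^2 + b*x + c."""
    return sum((y - (a * x**2 + b * x + c)) ** 2 for x, y in zip(xs, ys))

# Made-up data points (x_i, y_i).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 1.9, 5.2, 10.1]

# Least squares seeks the values of a, b and c that make S as small as possible.
print(residual_sum_of_squares(xs, ys, a=1.0, b=0.0, c=1.0))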

Solving the least squares problem

In the above example, f is linear in the parameters a, b and c. The problem simplifies considerably in this case and essentially reduces to a system of linear equations. This is explained in the article on linear least squares.
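
A minimal sketch of this linear case in Python, assuming NumPy is available; the quadratic model and the data are the same made-up example as above, and numpy.linalg.lstsq stands in for solving the resulting linear system.

import numpy as np

# Made-up data, as above.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 1.9, 5.2, 10.1])

# For f(x) = a*x^2 + b*x + c the residuals are linear in (a, b, c), so the
# problem reduces to the overdetermined linear system A p ≈ y, where the
# columns of the design matrix A correspond to a, b and c.
A = np.column_stack([xs**2, xs, np.ones_like(xs)])

# Equivalent to solving the normal equations (A^T A) p = A^T y.
p, *_ = np.linalg.lstsq(A, ys, rcond=None)
a, b, c = p
print(a, b, c)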

The problem is more difficult if f is not linear in the parameters to be determined. We then need to solve a general (unconstrained) optimization problem. Any algorithm for such problems, like Newton's method and gradient descent, can be used. Another possibility is to apply an algorithm that is developed especially to tackle least squares problems, for instance the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm.
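
As a rough sketch of the Gauss-Newton idea in Python (assuming NumPy, with a made-up exponential model f(x) = p0 * exp(p1 * x) and noiseless synthetic data): each iteration linearizes the model around the current parameters and solves the resulting linear least squares problem for the update.

import numpy as np

def model(p, xs):
    return p[0] * np.exp(p[1] * xs)

def jacobian(p, xs):
    # Partial derivatives of the model with respect to p0 and p1.
    return np.column_stack([np.exp(p[1] * xs), p[0] * xs * np.exp(p[1] * xs)])

def gauss_newton(xs, ys, p, steps=20):
    for _ in range(steps):
        r = ys - model(p, xs)                      # residuals at the current parameters
        J = jacobian(p, xs)
        delta = np.linalg.solve(J.T @ J, J.T @ r)  # linearized least squares step
        p = p + delta
    return p

# Synthetic data generated from known parameters p0 = 2.0, p1 = 1.5.
xs = np.linspace(0.0, 1.0, 20)
ys = 2.0 * np.exp(1.5 * xs)

# Start from an initial guess reasonably close to the true parameters.
print(gauss_newton(xs, ys, p=np.array([1.8, 1.4])))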

Least squares and regression analysis

In regression analysis, one replaces the relation

    f(x_i) ≈ y_i

by

    y_i = f(x_i) + ε_i,

where the noise term ε_i is a random variable with mean zero. In linear regression, the function f has the form f(x) = ax + b, with a and b to be determined; the general case is called nonlinear regression.

Again, one frequently estimates the parameters (a and b in the linear case) by least squares: one takes the values that minimize the sum of squared residuals S. The Gauss-Markov theorem states that for linear regression, the least squares estimates are optimal in a certain sense if the noise terms are independent and identically distributed (see that article for a more precise statement and for less restrictive conditions on the noise terms).
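
In the linear regression case, the least squares estimates of a and b have a closed form; the Python sketch below uses synthetic noisy data whose true values a = 2 and b = 1 are made up for the example.

import random

random.seed(0)
xs = [i / 10.0 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0.0, 0.3) for x in xs]  # y = a*x + b + noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least squares estimates obtained by minimizing S over a and b.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
a_hat = sxy / sxx
b_hat = mean_y - a_hat * mean_x
print(a_hat, b_hat)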
