Gauss–Newton algorithm

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jitse Niesen (talk | contribs) at 03:09, 28 October 2006 (→‎Other algorithms: Laplacian should be Hessian, see talk). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


In mathematics, the Gauss–Newton algorithm is used to solve nonlinear least squares problems. It is a modification, due to Carl Friedrich Gauss, of Newton's method that does not require second derivatives.

The problem

Given m functions f1, ..., fm of n parameters p1, ..., pn, with m ≥ n, we want to minimize the sum

    S(p) = \sum_{i=1}^{m} f_i(p)^2.

Here, p stands for the vector (p1, ..., pn).
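As a concrete illustration (the model and data here are hypothetical, not part of the article), the residuals for fitting y = p1 · exp(p2 · x) to five data points, together with the sum of squares S(p), can be written in Python with numpy:

```python
import numpy as np

# Hypothetical curve-fitting example: fit y = p1 * exp(p2 * x) to data,
# so the residuals are f_i(p) = p1 * exp(p2 * x_i) - y_i, with m = 5, n = 2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.7, 7.4, 20.1, 54.6])

def residuals(p):
    # The vector f(p) = (f_1(p), ..., f_m(p))
    return p[0] * np.exp(p[1] * x) - y

def S(p):
    # The sum of squares to be minimized
    f = residuals(p)
    return np.sum(f ** 2)
```

With m = 5 residuals and n = 2 parameters, the condition m ≥ n is satisfied.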

The algorithm

The Gauss–Newton algorithm is an iterative procedure. This means that the user has to provide an initial guess for the parameter vector p, which we will call p^0.

Subsequent guesses p^k for the parameter vector are then produced by the recurrence relation

    p^{k+1} = p^k - \left( J_f(p^k)^T J_f(p^k) \right)^{-1} J_f(p^k)^T f(p^k),

where f = (f1, ..., fm) and J_f(p) denotes the Jacobian of f at p (note that J_f is not square).

The matrix inverse is never computed explicitly in practice. Instead, we use

    p^{k+1} = p^k + \delta^k,

and we compute the update \delta^k by solving the linear system

    J_f(p^k)^T J_f(p^k) \, \delta^k = - J_f(p^k)^T f(p^k).
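The iteration just described, with the update obtained by solving the linear system rather than forming the inverse, can be sketched in Python with numpy (the function names and signatures here are illustrative assumptions, not from the article):

```python
import numpy as np

def gauss_newton(f, J, p0, iters=10):
    """Gauss-Newton iteration (sketch).

    f  : function returning the residual vector f(p), shape (m,)
    J  : function returning the Jacobian J_f(p), shape (m, n)
    p0 : initial guess for the parameter vector
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = f(p)
        Jp = J(p)
        # Solve (J^T J) delta = -J^T f instead of inverting J^T J.
        delta = np.linalg.solve(Jp.T @ Jp, -Jp.T @ r)
        p = p + delta
    return p
```

For linear residuals f(p) = Ap − b this reduces to solving the normal equations, so a single iteration already reaches the least squares solution.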

A good implementation of the Gauss–Newton algorithm also employs a line search algorithm: instead of the above formula for p^{k+1}, we use

    p^{k+1} = p^k + \alpha^k \delta^k,

where the step length \alpha^k is chosen to (at least approximately) minimize S along the direction \delta^k.
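One simple, if crude, way to choose the step length is backtracking: start at α = 1 and halve until the sum of squares decreases. A sketch under the same assumed signatures as above (production implementations use more careful acceptance rules):

```python
import numpy as np

def gauss_newton_ls(f, J, p0, iters=20):
    """Gauss-Newton with a simple backtracking line search (sketch)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = f(p)
        Jp = J(p)
        delta = np.linalg.solve(Jp.T @ Jp, -Jp.T @ r)
        S_old = np.sum(r ** 2)
        alpha = 1.0
        # Halve alpha until the sum of squares no longer increases.
        while np.sum(f(p + alpha * delta) ** 2) > S_old and alpha > 1e-8:
            alpha *= 0.5
        p = p + alpha * delta
    return p
```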

Other algorithms

The recurrence relation for Newton's method for minimizing a function S is

    p^{k+1} = p^k - \left( H_S(p^k) \right)^{-1} \nabla S(p^k),

where \nabla S and H_S denote the gradient and Hessian of S, respectively. Using Newton's method for our function

    S(p) = \sum_{i=1}^{m} f_i(p)^2

yields the recurrence relation

    p^{k+1} = p^k - \left( J_f(p^k)^T J_f(p^k) + \sum_{i=1}^{m} f_i(p^k) H_{f_i}(p^k) \right)^{-1} J_f(p^k)^T f(p^k).

We can conclude that the Gauss–Newton method is the same as Newton's method with the \sum_i f_i H_{f_i} term ignored.
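The effect of the ignored term can be seen in a toy one-parameter example (illustrative, not from the article): a single residual f(p) = p² − 2, so S(p) = (p² − 2)² with minimum at p = √2. At p = 1 the two methods take different steps:

```python
# Toy comparison of the Gauss-Newton and full Newton steps for the
# single residual f(p) = p^2 - 2, i.e. S(p) = (p^2 - 2)^2.
def step_comparison(p):
    f = p ** 2 - 2      # residual f(p)
    J = 2.0 * p         # Jacobian of f (a scalar here)
    H_f = 2.0           # Hessian of f (constant for this f)
    gn = -J * f / (J * J)                  # Gauss-Newton: drops f * H_f
    newton = -J * f / (J * J + f * H_f)    # full Newton step
    return gn, newton
```

From p = 1 the Gauss–Newton step is 0.5 (giving p = 1.5) while the full Newton step is 1 (giving p = 2); both then continue to converge toward √2 ≈ 1.414. Near the solution the residual f is small, so the dropped term \sum_i f_i H_{f_i} matters little and the two methods behave similarly.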

Other algorithms for solving least squares problems include the Levenberg–Marquardt algorithm and gradient descent.