This is an old revision of this page, as edited by Jitse Niesen at 03:09, 28 October 2006 (→Other algorithms: Laplacian should be Hessian, see talk). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
In mathematics, the Gauss-Newton algorithm is used to solve nonlinear least squares problems. Due to Carl Friedrich Gauss, it is a modification of Newton's method that does not require second derivatives.
Given m functions f1, ..., fm of n parameters p1, ..., pn with m ≥ n, we want to minimize the sum

S(p) = f1(p)² + f2(p)² + ⋯ + fm(p)².

Here, p stands for the vector (p1, ..., pn).
The Gauss-Newton algorithm is an iterative procedure. This means that the user has to provide an initial guess for the parameter vector p, which we will call p0.
Subsequent guesses pk for the parameter vector are then produced by the recurrence relation

pk+1 = pk − (Jf(pk)ᵀ Jf(pk))⁻¹ Jf(pk)ᵀ f(pk),

where f = (f1, ..., fm) and Jf(p) denotes the Jacobian of f at p (note that Jf is not square: it has m rows and n columns).
The matrix inverse is never computed explicitly in practice. Instead, we use

pk+1 = pk + δk,

and we compute the update δk by solving the linear system (the normal equations)

Jf(pk)ᵀ Jf(pk) δk = −Jf(pk)ᵀ f(pk).
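As a concrete illustration, the iteration can be sketched in a few lines of Python. This is a minimal sketch, not from the article: the model y = a·exp(b·x), the synthetic data, and the starting guess are all assumptions, and the 2×2 normal equations are solved directly by Cramer's rule so no libraries are needed. The starting guess is deliberately close to the solution, since the plain iteration (without the line search discussed below) can diverge from a poor start.

```python
import math

# Hypothetical example: fit y = a * exp(b * x) by Gauss-Newton.
# Residuals are f_i(p) = a*exp(b*x_i) - y_i; each step solves the
# normal equations J^T J delta = -J^T f for the update delta.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]  # synthetic data: a = 2, b = 0.5

def gauss_newton(a, b, iterations=20):
    for _ in range(iterations):
        # residual vector f and Jacobian rows [df/da, df/db]
        f = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        J = [[math.exp(b * x), a * x * math.exp(b * x)] for x in xs]
        # assemble J^T J (symmetric 2x2) and -J^T f
        A11 = sum(r[0] * r[0] for r in J)
        A12 = sum(r[0] * r[1] for r in J)
        A22 = sum(r[1] * r[1] for r in J)
        g1 = -sum(r[0] * fi for r, fi in zip(J, f))
        g2 = -sum(r[1] * fi for r, fi in zip(J, f))
        # solve the 2x2 system by Cramer's rule and apply the update
        det = A11 * A22 - A12 * A12
        d1 = (g1 * A22 - A12 * g2) / det
        d2 = (A11 * g2 - g1 * A12) / det
        a, b = a + d1, b + d2
    return a, b

a, b = gauss_newton(1.9, 0.45)  # start near the true parameters (2, 0.5)
```

On this zero-residual problem the iteration converges rapidly to (a, b) = (2, 0.5).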
A good implementation of the Gauss-Newton algorithm also employs a line search algorithm: instead of the above formula for pk+1, we use

pk+1 = pk + αk δk,

where the number αk is in some sense optimal, for example chosen to (approximately) minimize S along the search direction δk.
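One simple way to choose αk, sketched here as an illustration (the article does not prescribe a particular rule, and all names are hypothetical), is a backtracking line search: start with αk = 1 and halve it until the sum of squares decreases.

```python
# Illustrative backtracking line search for the damped update
# p_{k+1} = p_k + alpha_k * delta_k: try alpha = 1, 1/2, 1/4, ...
# and accept the first step that reduces S(p) = sum of squared residuals.

def sum_of_squares(f):
    return sum(fi * fi for fi in f)

def backtracking_step(p, delta, residuals, max_halvings=30):
    """Return p + alpha*delta for the largest alpha in {1, 1/2, 1/4, ...}
    that reduces the sum of squares; residuals(p) returns the vector f(p)."""
    s0 = sum_of_squares(residuals(p))
    alpha = 1.0
    for _ in range(max_halvings):
        trial = [pi + alpha * di for pi, di in zip(p, delta)]
        if sum_of_squares(residuals(trial)) < s0:
            return trial
        alpha /= 2.0
    return p  # no decrease found; keep the current iterate

# usage on a toy linear residual: the full step alpha = 1 already
# zeroes the residuals, so it is accepted immediately
p_next = backtracking_step([0.0, 0.0], [3.0, -1.0],
                           lambda q: [q[0] - 3.0, q[1] + 1.0])
```

Halving until S decreases guarantees that each accepted step makes progress, which is what protects the iteration from the overshooting a full Gauss-Newton step can produce far from the minimum.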
The recurrence relation for Newton's method for minimizing a function S is

pk+1 = pk − H(S)(pk)⁻¹ ∇S(pk),

where ∇S denotes the gradient of S and H(S) its Hessian. For S = Σi fi², the gradient is ∇S = 2 Jfᵀ f and the Hessian is H(S) = 2 (Jfᵀ Jf + Σi fi H(fi)), where H(fi) denotes the Hessian of fi. Substituting these (the factors of 2 cancel) yields the recurrence relation

pk+1 = pk − (Jf(pk)ᵀ Jf(pk) + Σi fi(pk) H(fi)(pk))⁻¹ Jf(pk)ᵀ f(pk).
We can conclude that the Gauss-Newton method is the same as Newton's method with the Σi fi H(fi) term ignored. Dropping this term is reasonable when the residuals fi are small near the minimum, so on small-residual problems Gauss-Newton behaves much like Newton's method while needing only first derivatives.
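The gradient and Hessian underlying this comparison can be written out compactly (a standard derivation, stated here for reference):

```latex
% For S(p) = \sum_{i=1}^{m} f_i(p)^2, differentiating twice gives
\nabla S(p) = 2\, J_f(p)^{T} f(p), \qquad
H(S)(p) = 2\left( J_f(p)^{T} J_f(p)
          + \sum_{i=1}^{m} f_i(p)\, H(f_i)(p) \right).
% Newton's step applies the inverse of the full Hessian to the gradient;
% Gauss-Newton keeps only the J_f^T J_f part, dropping \sum_i f_i H(f_i),
% which vanishes as the residuals f_i go to zero.
```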