
Broyden–Fletcher–Goldfarb–Shanno algorithm



Prerequisites: Gradient descent, Newton's method in optimization

In mathematics, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is an iterative method for solving unconstrained nonlinear optimization problems.

The BFGS method is a quasi-Newton method, derived from Newton's method in optimization. As such, it is a member of a broad class of hill-climbing optimization techniques.

The principal idea of the method is to construct an approximate Hessian matrix of second derivatives of the function to be minimized, by analyzing successive gradient vectors. This approximation of the function's second derivatives allows a quasi-Newton step to be taken towards the minimum in parameter space.

The Hessian matrix does not need to be computed at any stage. However, the method assumes that the function can be locally approximated as a quadratic in the region around the optimum.
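
BFGS is implemented in many numerical libraries. As a quick practical illustration, the sketch below uses SciPy's scipy.optimize.minimize; the Rosenbrock test function and the starting point are invented only for the example and are not part of the method.

  import numpy as np
  from scipy.optimize import minimize

  # Example objective: the Rosenbrock function, a standard smooth test problem.
  def f(x):
      return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

  def grad_f(x):
      return np.array([
          -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
          200.0 * (x[1] - x[0]**2),
      ])

  result = minimize(f, x0=np.array([-1.2, 1.0]), method="BFGS", jac=grad_f)
  print(result.x)         # approximate minimizer, close to (1, 1)
  print(result.hess_inv)  # final approximation to the inverse Hessian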

Rationale

The search direction p_k at stage k is given by the solution of the analogue of the Newton equation

B_k p_k = -\nabla f(x_k).

A line search in the direction p_k is then used to find the next point x_{k+1}.

Instead of requiring the full Hessian matrix at the point x_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:

B_{k+1} = B_k + U_k + V_k.

Both U_k and V_k are symmetric rank-one matrices, but they have different bases. The rank-one assumption here means that we may write U_k = \alpha u u^T and V_k = \beta v v^T for scalars \alpha, \beta and vectors u, v. So, equivalently, U_k and V_k together construct a rank-two update matrix, which is robust against the scaling problems often suffered in gradient descent searches:

B_{k+1} = B_k + \alpha u u^T + \beta v v^T

(as in Broyden's method, the multidimensional analogue of the secant method). The quasi-Newton condition imposed on this update is

B_{k+1} (x_{k+1} - x_k) = \nabla f(x_{k+1}) - \nabla f(x_k).
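
For concreteness, the rank-one terms actually chosen by the BFGS update (given explicitly in step 4 of the algorithm below), with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k), are

U_k = \frac{y_k y_k^T}{y_k^T s_k}, \qquad V_k = -\frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k},

and a one-line check shows that this choice satisfies the quasi-Newton condition:

B_{k+1} s_k = B_k s_k + \frac{y_k (y_k^T s_k)}{y_k^T s_k} - \frac{B_k s_k (s_k^T B_k s_k)}{s_k^T B_k s_k} = B_k s_k + y_k - B_k s_k = y_k.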

Algorithm

From an initial guess x_0 and an approximate Hessian matrix B_0, the following steps are repeated until x_k converges to the solution.

  1. Obtain a direction p_k by solving: B_k p_k = -\nabla f(x_k).
  2. Perform a line search to find the optimal step size \alpha_k in the direction found in the first step, then update x_{k+1} = x_k + \alpha_k p_k.
  3. Set s_k = \alpha_k p_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
  4. B_{k+1} = B_k + \frac{y_k y_k^T}{y_k^T s_k} - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k}.

f(x) denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, \|\nabla f(x_k)\|. Practically, B_0 can be initialized with the identity matrix, B_0 = I, so that the first step is equivalent to a gradient descent step, but further steps are more and more refined by B_k, the approximation to the Hessian.

The first step of the algorithm is carried out using the inverse of the matrix B_k, which is usually obtained efficiently by applying the Sherman–Morrison formula to the update in step 4 of the algorithm, giving

B_{k+1}^{-1} = B_k^{-1} + \frac{(s_k^T y_k + y_k^T B_k^{-1} y_k)(s_k s_k^T)}{(s_k^T y_k)^2} - \frac{B_k^{-1} y_k s_k^T + s_k y_k^T B_k^{-1}}{s_k^T y_k}.
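
A minimal sketch of the whole procedure in Python (using NumPy) is given below. The function name bfgs_minimize, the backtracking line search used in place of an exact minimizing step, and the stopping tolerance are illustrative choices rather than part of the method's definition; the sketch maintains the inverse approximation H_k = B_k^{-1} directly, as described above.

  import numpy as np

  def bfgs_minimize(f, grad, x0, tol=1e-6, max_iter=200):
      """Minimize f starting from x0 using the BFGS update of the inverse Hessian."""
      x = np.asarray(x0, dtype=float)
      H = np.eye(x.size)                # H approximates the inverse Hessian; start with the identity
      g = grad(x)
      for _ in range(max_iter):
          if np.linalg.norm(g) < tol:   # convergence test on the gradient norm
              break
          p = -H @ g                    # step 1: solve B_k p_k = -grad f(x_k), here via H = B^{-1}
          # step 2: backtracking (Armijo) line search instead of an exact line search
          alpha, c, rho = 1.0, 1e-4, 0.5
          fx = f(x)
          while f(x + alpha * p) > fx + c * alpha * (g @ p):
              alpha *= rho
          x_new = x + alpha * p
          g_new = grad(x_new)
          s = x_new - x                 # step 3: s_k and y_k
          y = g_new - g
          sy = s @ y
          if sy > 1e-12:                # step 4 in Sherman-Morrison form, applied to H = B^{-1}
              H = (H
                   + ((sy + y @ H @ y) / sy**2) * np.outer(s, s)
                   - (np.outer(H @ y, s) + np.outer(s, H @ y)) / sy)
          x, g = x_new, g_new
      return x

  # Example: minimize the Rosenbrock function from the illustration above.
  if __name__ == "__main__":
      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
      print(bfgs_minimize(f, grad, [-1.2, 1.0]))   # approaches (1, 1)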

Credible intervals or confidence intervals for the solution can be obtained from the inverse of the final Hessian matrix.
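
A brief sketch of that use follows, under the assumption that the objective is a negative log-likelihood, so that the inverse Hessian at the minimum approximates the covariance matrix of the estimates. The toy Gaussian model, sample size, and starting point are invented for the example, and the hess_inv reported by a BFGS implementation is itself only an approximation to the true inverse Hessian.

  import numpy as np
  from scipy.optimize import minimize

  # Toy model: estimate the mean and log-standard-deviation of normal data
  # by minimizing the negative log-likelihood (up to an additive constant).
  rng = np.random.default_rng(0)
  data = rng.normal(loc=3.0, scale=2.0, size=500)

  def neg_log_likelihood(theta):
      mu, log_sigma = theta
      sigma = np.exp(log_sigma)
      return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

  result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="BFGS")
  std_err = np.sqrt(np.diag(result.hess_inv))          # approximate standard errors
  ci = np.column_stack([result.x - 1.96 * std_err,
                        result.x + 1.96 * std_err])    # rough 95% intervals
  print(result.x, ci)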

Bibliography

  • Broyden, C. G. (1970). Journal of the Institute of Mathematics and Its Applications, 6, 76–90.
  • Fletcher, R. (1970). Computer Journal, 13, 317.
  • Goldfarb, D. (1970). Mathematics of Computation, 24, 23.
  • Shanno, D. F. (1970). Mathematics of Computation, 24, 647.
  • Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 0-486-43227-0.

See also