The BFGS method approximates Newton's method, a class of hill-climbing optimization techniques that seeks a stationary point of a (preferably twice continuously differentiable) function. For such problems, a necessary condition for optimality is that the gradient be zero. Newton's method and the BFGS method are not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. Whereas Newton's method uses both the first and second derivatives of the function, BFGS requires only the first derivatives, building up second-derivative information from gradient evaluations. Even so, BFGS has proven to have good performance for non-smooth optimization problems.
In quasi-Newton methods, the Hessian matrix of second derivatives does not need to be evaluated directly. Instead, the Hessian matrix is approximated using updates specified by gradient evaluations (or approximate gradient evaluations). Quasi-Newton methods are generalizations of the secant method to find the root of the first derivative for multidimensional problems. In multi-dimensional problems, the secant equation does not specify a unique solution, and quasi-Newton methods differ in how they constrain the solution. The BFGS method is one of the most popular members of this class. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant handles simple box constraints.
The search direction $p_k$ at stage $k$ is given by the solution of the analogue of the Newton equation

$$B_k p_k = -\nabla f(x_k),$$

where $B_k$ is an approximation to the Hessian matrix, which is updated iteratively at each stage, and $\nabla f(x_k)$ is the gradient of the function evaluated at $x_k$. A line search in the direction $p_k$ is then used to find the next point $x_{k+1}$. Instead of requiring the full Hessian matrix at the point $x_{k+1}$ to be computed as $B_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:

$$B_{k+1} = B_k + U_k + V_k.$$
Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix.
The quasi-Newton condition imposed on this update is the secant equation

$$B_{k+1}(x_{k+1} - x_k) = \nabla f(x_{k+1}) - \nabla f(x_k).$$

Writing $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, the choices $U_k = \frac{y_k y_k^{T}}{y_k^{T} s_k}$ and $V_k = -\frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}$ satisfy this condition and give the BFGS update

$$B_{k+1} = B_k + \frac{y_k y_k^{T}}{y_k^{T} s_k} - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}.$$
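As a quick numerical check of this construction, the update can be applied to arbitrary data satisfying the curvature condition $y_k^{T} s_k > 0$ and the secant equation verified directly. A minimal NumPy sketch (all data here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Arbitrary symmetric positive-definite B_k and a step s_k; y_k is built
# from another SPD matrix so that the curvature condition y^T s > 0 holds.
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)     # B_k, symmetric positive definite
s = rng.standard_normal(n)      # s_k = x_{k+1} - x_k
M = A.T @ A + np.eye(n)         # any SPD matrix
y = M @ s                       # y_k, with y^T s = s^T M s > 0

# Rank-two BFGS update: B_{k+1} = B_k + y y^T/(y^T s) - (B s)(B s)^T/(s^T B s)
Bs = B @ s
B_next = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

print(np.allclose(B_next @ s, y))  # True: the secant equation holds
```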
From an initial guess $x_0$ and an approximate Hessian matrix $B_0$, the following steps are repeated as $x_k$ converges to the solution:

1. Obtain a direction $p_k$ by solving $B_k p_k = -\nabla f(x_k)$.
2. Perform a line search to find an acceptable stepsize $\alpha_k$ in the direction found in the first step, then update $x_{k+1} = x_k + \alpha_k p_k$.
3. Set $s_k = \alpha_k p_k$.
4. Set $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$.
5. Update $B_{k+1} = B_k + \frac{y_k y_k^{T}}{y_k^{T} s_k} - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}$.

$f(x)$ denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, $\|\nabla f(x_k)\|$. Practically, $B_0$ can be initialized with $B_0 = I$, so that the first step is equivalent to a gradient descent, but further steps are progressively refined by $B_k$, the approximation to the Hessian.
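A minimal reference sketch of this loop in NumPy follows; the backtracking line search, its constants, and the Rosenbrock test function are illustrative choices, not part of the BFGS method itself.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimize f by the BFGS iteration described above. B is the
    approximate Hessian, initialized to the identity so that the
    first step is a gradient-descent step."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # convergence test on ||grad f(x_k)||
            break
        p = np.linalg.solve(B, -g)         # step 1: solve B_k p_k = -grad f(x_k)
        alpha = 1.0                        # step 2: backtracking (Armijo) line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p                      # step 3: s_k = alpha_k p_k
        x_new = x + s
        y = grad(x_new) - g                # step 4: y_k
        if y @ s > 1e-10:                  # step 5: rank-two update; skipped if the
            Bs = B @ s                     # curvature condition y^T s > 0 fails
            B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        x = x_new
    return x

# Illustrative test: the Rosenbrock function, minimum at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))  # approximately [1. 1.]
```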
The first step of the algorithm is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving

$$B_{k+1}^{-1} = \left(I - \frac{s_k y_k^{T}}{y_k^{T} s_k}\right) B_k^{-1} \left(I - \frac{y_k s_k^{T}}{y_k^{T} s_k}\right) + \frac{s_k s_k^{T}}{y_k^{T} s_k}.$$

This can be computed efficiently without temporary matrices, recognizing that $B_k^{-1}$ is symmetric, and that $y_k^{T} B_k^{-1} y_k$ and $s_k^{T} y_k$ are scalars, using an expansion such as

$$B_{k+1}^{-1} = B_k^{-1} + \frac{(s_k^{T} y_k + y_k^{T} B_k^{-1} y_k)(s_k s_k^{T})}{(s_k^{T} y_k)^2} - \frac{B_k^{-1} y_k s_k^{T} + s_k y_k^{T} B_k^{-1}}{s_k^{T} y_k}.$$
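As a sketch of how this expansion is used in practice, the update of $H = B_k^{-1}$ can be written with only matrix–vector products and rank-one outer products, at $O(n^2)$ cost per iteration and with no matrix–matrix multiplications or solves (function and variable names here are illustrative):

```python
import numpy as np

def update_inverse_hessian(H, s, y):
    """Return the inverse-BFGS update of H = B_k^{-1} via the expansion
    above, using only matrix-vector products and rank-one outer products."""
    sy = s @ y      # scalar s_k^T y_k
    Hy = H @ y      # vector B_k^{-1} y_k
    yHy = y @ Hy    # scalar y_k^T B_k^{-1} y_k (H is symmetric)
    H = H + ((sy + yHy) / sy**2) * np.outer(s, s) \
          - (np.outer(Hy, s) + np.outer(s, Hy)) / sy
    return H
```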
In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
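Concretely, for maximum-likelihood estimation one minimizes the negative log-likelihood $f(\theta) = -\log L(\theta)$, and the standard asymptotic approximation is

$$\widehat{\operatorname{Cov}}(\hat{\theta}) \approx \left[\nabla^2 f(\hat{\theta})\right]^{-1}, \qquad \operatorname{se}(\hat{\theta}_i) \approx \sqrt{\left[\nabla^2 f(\hat{\theta})\right]^{-1}_{ii}},$$

so substituting the final BFGS matrix $B_k^{-1}$ for the true inverse Hessian yields only a heuristic estimate; for dependable intervals the Hessian is typically recomputed at the solution (for example, by finite differences).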
The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2. Ceres Solver implements both BFGS and L-BFGS. In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run full-memory BFGS using any of the L-BFGS algorithms by setting the memory parameter (the number of stored correction pairs) to a very large number.
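For example, via SciPy's scipy.optimize.minimize interface, which also exposes BFGS (the Rosenbrock objective is an illustrative choice; SciPy approximates the gradient by finite differences when none is supplied):

```python
import numpy as np
from scipy.optimize import minimize

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS")
print(res.x)         # approximately [1. 1.]
print(res.hess_inv)  # final inverse-Hessian approximation (see the caveat above)
```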
Octave uses BFGS with a double-dogleg approximation to the cubic line search.
A high-precision arithmetic version of BFGS (pBFGS), implemented in C++ and integrated with the high-precision arithmetic package ARPREC, is robust against numerical instability (e.g., round-off errors).
BFGS and L-BFGS are also implemented in C as part of the open-source GNU Regression, Econometrics and Time-series Library (gretl).
- Quasi-Newton methods
- Davidon–Fletcher–Powell formula
- Gradient descent
- Nelder–Mead method
- Levenberg–Marquardt algorithm
- Pattern search (optimization)
- BHHH algorithm
- Lewis, Adrian S.; Overton, Michael (2009), Nonsmooth optimization via BFGS (PDF)
- Nocedal & Wright (2006), page 24
- Byrd, Richard H.; Lu, Peihuang; Nocedal, Jorge; Zhu, Ciyou (1995), "A Limited Memory Algorithm for Bound Constrained Optimization", SIAM Journal on Scientific Computing, 16 (5): 1190–1208, doi:10.1137/0916069
- Avriel, Mordecai (2003), Nonlinear Programming: Analysis and Methods, Dover Publishing, ISBN 0-486-43227-0
- Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006), Numerical optimization: Theoretical and practical aspects, Universitext (Second revised ed. of translation of 1997 French ed.), Berlin: Springer-Verlag, pp. xiv+490, doi:10.1007/978-3-540-35447-5, ISBN 3-540-35445-X, MR 2265882. French original: Optimisation numérique: Aspects théoriques et pratiques, ISBN 3-540-63183-6
- Broyden, C. G. (1970), "The convergence of a class of double-rank minimization algorithms", Journal of the Institute of Mathematics and Its Applications, 6: 76–90, doi:10.1093/imamat/6.1.76
- Fletcher, R. (1970), "A New Approach to Variable Metric Algorithms", Computer Journal, 13 (3): 317–322, doi:10.1093/comjnl/13.3.317
- Fletcher, Roger (1987), Practical methods of optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8
- Goldfarb, D. (1970), "A Family of Variable Metric Updates Derived by Variational Means", Mathematics of Computation, 24 (109): 23–26, doi:10.1090/S0025-5718-1970-0258249-6
- Luenberger, David G.; Ye, Yinyu (2008), Linear and nonlinear programming, International Series in Operations Research & Management Science, 116 (Third ed.), New York: Springer, pp. xiv+546, ISBN 978-0-387-74502-2, MR 2423726
- Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-30303-1
- Shanno, David F. (July 1970), "Conditioning of quasi-Newton methods for function minimization", Mathematics of Computation, 24 (111): 647–656, doi:10.1090/S0025-5718-1970-0274029-X, MR 42:8905
- Shanno, David F.; Kettler, Paul C. (July 1970), "Optimal conditioning of quasi-Newton methods", Mathematics of Computation, 24 (111): 657–664, doi:10.1090/S0025-5718-1970-0274030-6, MR 42:8906
- Source code of high-precision BFGS: C++ source code of BFGS with high-precision arithmetic