Broyden's method
In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.[1]
Newton's method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian only at the first iteration, and to do a rank-one update at the other iterations.
In 1979 Gay proved that when Broyden's method is applied to a linear system of size n × n, it terminates in 2n steps,[2] although like all quasi-Newton methods, it may not converge for nonlinear systems.
Description of the method
Solving a single-variable equation
In the secant method, we replace the first derivative f′ at x_n with the finite difference approximation:
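f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}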
and proceed similarly to Newton's method:
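x_{n+1} = x_n - \frac{1}{f'(x_n)} f(x_n) \approx x_n - \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} f(x_n)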
where n is the iteration index.
Solving a system of nonlinear equations
To solve a system of k nonlinear equations
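f(x) = 0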
where f is a vector-valued function of vector x:
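x = (x_1, \ldots, x_k), \qquad f(x) = (f_1(x), \ldots, f_k(x))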
For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with the Jacobian J. The Jacobian matrix is determined iteratively based on the secant equation in the finite difference approximation:
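J_n (x_n - x_{n-1}) \approx f(x_n) - f(x_{n-1})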
where n is the iteration index. For clarity, let us define:
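\Delta x_n = x_n - x_{n-1}, \qquad \Delta f_n = f(x_n) - f(x_{n-1})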
so the above may be rewritten as:
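J_n \, \Delta x_n \approx \Delta f_n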
The above equation is underdetermined when k is greater than one. Broyden suggests using the current estimate of the Jacobian matrix J_{n−1} and improving upon it by taking the solution to the secant equation that is a minimal modification to J_{n−1}:
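J_n = J_{n-1} + \frac{\Delta f_n - J_{n-1} \Delta x_n}{\|\Delta x_n\|^2} \, \Delta x_n^T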
This minimizes the following Frobenius norm:
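\| J_n - J_{n-1} \|_F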
We may then proceed in the Newton direction:
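x_{n+1} = x_n - J_n^{-1} f(x_n)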
Broyden also suggested using the Sherman-Morrison formula to directly update the inverse of the Jacobian matrix:
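J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\Delta x_n^T J_{n-1}^{-1} \Delta f_n} \, \Delta x_n^T J_{n-1}^{-1}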
This first method is commonly known as the "good Broyden's method".
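A minimal sketch of this variant in Python (using NumPy) is given below; the function name broyden_good, the identity matrix used when no initial Jacobian estimate is supplied, and the stopping tolerance are illustrative choices rather than part of Broyden's original description:

import numpy as np

def broyden_good(f, x0, J0=None, tol=1e-10, max_iter=100):
    # Sketch of the "good" Broyden method using the inverse-Jacobian update.
    # f: callable mapping a 1-D array to a 1-D array of the same length.
    # J0: optional initial Jacobian estimate; the identity is used if omitted.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    # Invert the initial Jacobian once; afterwards only rank-one updates are applied.
    J_inv = np.linalg.inv(J0) if J0 is not None else np.eye(len(x))
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -J_inv @ fx          # quasi-Newton step with the current inverse estimate
        x_new = x + dx
        f_new = f(x_new)
        df = f_new - fx
        # Sherman-Morrison rank-one update of the inverse Jacobian ("good" update).
        u = J_inv @ df
        J_inv += np.outer(dx - u, dx @ J_inv) / (dx @ u)
        x, fx = x_new, f_new
    return x

Because only the inverse is stored and corrected by a rank-one term, each iteration needs matrix-vector products rather than the solution of a linear system.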
A similar technique can be derived by using a slightly different modification to J_{n−1}. This yields a second method, the so-called "bad Broyden's method" (but see[3]):
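J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\|\Delta f_n\|^2} \, \Delta f_n^T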
This minimizes a different Frobenius norm:
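\| J_n^{-1} - J_{n-1}^{-1} \|_F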
Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its update.
Modifications
Schubert's or sparse Broyden algorithm - a modification for sparse Jacobian matrices.[4]
Klement algorithm - uses fewer iterations to solve many systems of equations.[5][6]
See also
- Secant method
- Newton's method
- Quasi-Newton method
- Newton's method in optimization
- Davidon-Fletcher-Powell formula
- Broyden-Fletcher-Goldfarb-Shanno (BFGS) method
References
- ^ Broyden, C. G. (October 1965). "A Class of Methods for Solving Nonlinear Simultaneous Equations". Mathematics of Computation. 19 (92). American Mathematical Society: 577–593. doi:10.2307/2003941. JSTOR 2003941.
- ^ Gay, D. M. (August 1979). "Some convergence properties of Broyden's method". SIAM Journal on Numerical Analysis. 16 (4). SIAM: 623–630. doi:10.1137/0716047.
- ^ Kvaalen, Eric (November 1991). "A faster Broyden method". BIT Numerical Mathematics. 31 (2): 369–372. doi:10.1007/BF01931297.
- ^ Schubert, L. K. (1970-01-01). "Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian". Mathematics of Computation. 24 (109): 27–30. doi:10.1090/S0025-5718-1970-0258276-9. ISSN 0025-5718.
- ^ Klement, Jan (2014-11-23). "On Using Quasi-Newton Algorithms of the Broyden Class for Model-to-Test Correlation". Journal of Aerospace Technology and Management. 6 (4): 407–414. doi:10.5028/jatm.v6i4.373. ISSN 2175-9146.
- ^ "Broyden class methods - File Exchange - MATLAB Central". www.mathworks.com. Retrieved 2016-02-04.