# Broyden's method

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots of a system of equations in k variables. It was originally described by C. G. Broyden in 1965.[1]

Newton's method for solving $\displaystyle \vec{F}(\vec{x})=\vec{0}$ uses the Jacobian matrix, $\displaystyle J$, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian only at the first iteration and to perform a rank-one update at the subsequent iterations.

In 1979, Gay proved that when Broyden's method is applied to a linear system of size $n \times n$, it terminates in $2n$ steps,[2] although, like all quasi-Newton methods, it may fail to converge on nonlinear systems.

## Description of the method

### Solving a single-variable equation

In the secant method, we replace the first derivative $\displaystyle f'(x_n)$ with the finite-difference approximation:

$f'(x_n) \simeq \frac {f(x_n)-f(x_{n-1})}{x_n-x_{n-1} },$

and proceed as in Newton's method ($n$ is the iteration index):

$x_{n+1}=x_n-\frac{1}{f'(x_n)} f(x_n) .$
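As a concrete illustration (not part of the original description), the secant iteration above can be sketched in a few lines of Python; the function name `secant` and the example equation $x^2 - 2 = 0$ are chosen here purely for demonstration:

```python
# Secant method for a single-variable equation f(x) = 0.
# The derivative f'(x_n) is replaced by the finite-difference
# approximation (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}).
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        fprime = (f1 - f0) / (x1 - x0)   # secant slope approximating f'(x1)
        x0, x1 = x1, x1 - f1 / fprime    # Newton-style step with the approximation
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)  # converges to sqrt(2)
```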

### Solving a set of nonlinear equations

To solve a set of nonlinear equations

$\displaystyle \vec{F}(\vec{x})=\vec{0}$,

where $\vec{F}$ is a vector-valued function of the vector $\vec{x}$ (for $k$ equations in $k$ unknowns):

$\vec{x} = (x_1, x_2, x_3, \dots, x_k)$
$\vec{F}(\vec{x}) = (f_1(x_1, x_2, \dots, x_k), f_2(x_1, x_2, \dots, x_k), \dots, f_k(x_1, x_2, \dots, x_k))$

For such problems, Broyden gives a generalization of the above formula, replacing the derivative $\displaystyle \vec F'$ with the Jacobian $\displaystyle J$. The Jacobian matrix is determined iteratively from the secant equation, a finite-difference approximation:

$J_n \cdot (\vec{x}_n-\vec{x}_{n-1})\simeq \vec{F}(\vec{x}_n)-\vec{F}(\vec{x}_{n-1}),$

where $n$ is the iteration index. However, the above equation is underdetermined in more than one dimension. Broyden suggested using the current estimate of the Jacobian matrix $\displaystyle J_{n-1}$ and improving upon it by taking the solution of the secant equation that is a minimal modification of $\displaystyle J_{n-1}$ (minimal in the sense of minimizing the Frobenius norm $\displaystyle \|J_{n} - J_{n-1}\|_{F}$):

$J_n=J_{n-1}+\frac{\Delta \vec{F}_n-J_{n-1} \Delta \vec{x}_n}{\|\Delta \vec{x}_n\|^2} \Delta \vec{x}_n^T$

where

$\Delta \vec{x}_n = \vec{x}_{n} - \vec{x}_{n-1}$
$\Delta \vec{F}_n = \vec{F}(\vec{x}_{n}) - \vec{F}(\vec{x}_{n-1})$

Then we proceed in the Newton direction:

$\vec{x}_{n+1}=\vec{x}_n-J_n^{-1}\vec{F}(\vec{x}_n).$
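Putting the pieces together, a minimal NumPy sketch of the iteration (compute the Jacobian once, then alternate rank-one updates with Newton-direction steps) might look like the following; the function name `broyden` and the two-variable test system are illustrative assumptions, not part of the original text:

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-12, max_iter=100):
    """Broyden's method: rank-one secant updates of an initial Jacobian J0."""
    x = np.asarray(x0, dtype=float)
    J = np.asarray(J0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -Fx)          # Newton direction with the current J
        x = x + dx
        F_new = F(x)
        dF = F_new - Fx
        # Secant update: J_n = J_{n-1} + (dF - J_{n-1} dx) dx^T / ||dx||^2
        J += np.outer(dF - J @ dx, dx) / (dx @ dx)
        Fx = F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 2 and x = y, which has a root at (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2, v[0] - v[1]])
J0 = np.array([[4.0, 1.0], [1.0, -1.0]])      # exact Jacobian at x0 = (2, 0.5)
x = broyden(F, [2.0, 0.5], J0)
```

Note that only the initial Jacobian `J0` is computed from the function itself; every later iterate refines it using function values alone.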

Broyden also suggested using the Sherman–Morrison formula to update the inverse of the Jacobian matrix directly:

$J_n^{-1}=J_{n-1}^{-1}+\frac{\Delta \vec{x}_n-J^{-1}_{n-1} \Delta \vec{F}_n}{\Delta \vec{x}_n^T J^{-1}_{n-1}\Delta \vec{F}_n} (\Delta \vec{x}_n^T J^{-1}_{n-1})$

This method is commonly known as the "good Broyden's method". A similar technique can be derived by using a slightly different modification to $J_{n-1}$ (which minimizes $\displaystyle \|J^{-1}_{n} - J^{-1}_{n-1}\|_{F}$ instead); this yields the so-called "bad Broyden's method" (but see [3]):

$J_n^{-1}=J_{n-1}^{-1}+\frac{\Delta \vec{x}_n-J^{-1}_{n-1} \Delta \vec{F}_n}{\Delta \vec{F}_n^T \Delta \vec{F}_n} \Delta \vec{F}_n^T$
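Both inverse updates can be sketched side by side. The following is a minimal NumPy sketch under the same illustrative assumptions as above (the function name `broyden_inverse` and the two-variable test system are chosen for demonstration):

```python
import numpy as np

def broyden_inverse(F, x0, Jinv0, variant="good", tol=1e-12, max_iter=100):
    """Broyden's method maintaining the inverse Jacobian directly.
    variant="good": Sherman-Morrison update; variant="bad": the simpler update."""
    x = np.asarray(x0, dtype=float)
    Jinv = np.asarray(Jinv0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        x_new = x - Jinv @ Fx                 # Newton-style step, no linear solve
        F_new = F(x_new)
        dx, dF = x_new - x, F_new - Fx
        u = dx - Jinv @ dF
        if variant == "good":
            v = dx @ Jinv                     # row vector dx^T J^{-1}
            Jinv = Jinv + np.outer(u, v) / (v @ dF)
        else:                                 # "bad" Broyden: denominator dF^T dF
            Jinv = Jinv + np.outer(u, dF) / (dF @ dF)
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 2 and x = y, which has a root at (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2, v[0] - v[1]])
Jinv0 = np.linalg.inv(np.array([[4.0, 1.0], [1.0, -1.0]]))  # inverse Jacobian at (2, 0.5)
x_good = broyden_inverse(F, [2.0, 0.5], Jinv0, variant="good")
x_bad = broyden_inverse(F, [2.0, 0.5], Jinv0, variant="bad")
```

Avoiding a linear solve at every step is the main appeal of the inverse updates: each iteration costs $O(k^2)$ instead of the $O(k^3)$ of a fresh factorization.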

Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian; since it is symmetric, this adds further constraints to its update.