# Line search


In optimization, the line search strategy is one of two basic iterative approaches to find a local minimum ${\displaystyle \mathbf {x} ^{*}}$ of an objective function ${\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }$. The other approach is trust region.

The line search approach first finds a descent direction along which the objective function ${\displaystyle f}$ will be reduced, and then computes a step size that determines how far ${\displaystyle \mathbf {x} }$ should move along that direction. The descent direction can be computed by various methods, such as gradient descent or a quasi-Newton method. The step size can be determined either exactly or inexactly.

## Example use

Here is an example gradient method that uses a line search in step 4.

1. Set iteration counter ${\displaystyle \displaystyle k=0}$, and make an initial guess ${\displaystyle \mathbf {x} _{0}}$ for the minimum
2. Repeat:
3.     Compute a descent direction ${\displaystyle \mathbf {p} _{k}}$
4.     Choose ${\displaystyle \displaystyle \alpha _{k}}$ to 'loosely' minimize ${\displaystyle h(\alpha )=f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})}$ over ${\displaystyle \alpha \in \mathbb {R} _{+}}$
5.     Update ${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}$, and ${\textstyle k=k+1}$
6.     Until ${\displaystyle \|\nabla f(\mathbf {x} _{k+1})\|<{\text{tolerance}}}$
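The loop above can be sketched in Python. This is a minimal illustration, not a reference implementation: it uses the steepest-descent direction ${\displaystyle \mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k})}$ and a simple backtracking rule (the Armijo sufficient-decrease condition) for step 4; the function names and parameter defaults are illustrative.

```python
import numpy as np

def backtracking(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition holds:
    f(x + alpha*p) <= f(x) + c * alpha * grad(x).p"""
    fx, gx = f(x), grad(x)
    while f(x + alpha * p) > fx + c * alpha * gx.dot(p):
        alpha *= rho
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Steps 1-6 of the example gradient method, with a backtracking
    line search playing the role of step 4."""
    x = np.asarray(x0, dtype=float)            # step 1: initial guess
    for _ in range(max_iter):                  # step 2: repeat
        g = grad(x)
        if np.linalg.norm(g) < tol:            # step 6: stopping test
            break
        p = -g                                 # step 3: descent direction
        alpha = backtracking(f, grad, x, p)    # step 4: choose step size
        x = x + alpha * p                      # step 5: update
    return x

# Minimize f(x, y) = (x - 3)^2 + 2(y + 1)^2, minimum at (3, -1).
f = lambda x: (x[0] - 3)**2 + 2 * (x[1] + 1)**2
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
x_star = gradient_descent(f, grad, [0.0, 0.0])
```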

At the line search step (4) the algorithm may minimize h exactly, by solving ${\displaystyle h'(\alpha _{k})=0}$, or loosely, by asking only for a sufficient decrease in h. One example of the former is the conjugate gradient method. The latter is called an inexact line search and may be performed in a number of ways, such as a backtracking line search or by using the Wolfe conditions.
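For a quadratic objective ${\displaystyle f(\mathbf {x} )={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} -\mathbf {b} ^{\mathsf {T}}\mathbf {x} }$ (the setting of the conjugate gradient method), the exact line search ${\displaystyle h'(\alpha )=0}$ has a closed-form solution. The sketch below, assuming NumPy and a symmetric positive-definite A, derives it: ${\displaystyle h'(\alpha )=\mathbf {p} ^{\mathsf {T}}(A\mathbf {x} -\mathbf {b} )+\alpha \,\mathbf {p} ^{\mathsf {T}}A\mathbf {p} =0}$.

```python
import numpy as np

def exact_step(A, b, x, p):
    """Exact minimizer of h(a) = f(x + a*p) for the quadratic
    f(x) = 0.5 x^T A x - b^T x, found by solving h'(a) = 0:
    a = -p.r / p.(A p), where r = A x - b is the gradient at x."""
    r = A @ x - b
    return -(p @ r) / (p @ (A @ p))

# One exact-line-search step on a small quadratic.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])       # symmetric positive definite
b = np.array([1.0, 2.0])
x = np.zeros(2)
p = -(A @ x - b)                 # steepest-descent direction at x
alpha = exact_step(A, b, x, p)
x_new = x + alpha * p
```

After the step, the directional derivative of f along p vanishes at `x_new`, which is exactly the condition ${\displaystyle h'(\alpha _{k})=0}$.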

Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.

## Algorithms

### Direct search methods

In this method, the minimum must first be bracketed, so the algorithm must identify points x1 and x2 such that the sought minimum lies between them. The interval is then narrowed by computing ${\displaystyle f(x)}$ at two interior points, x3 and x4, and rejecting whichever of the two outer points is not adjacent to the interior point with the lower function value. In subsequent steps, only one extra interior point needs to be calculated. Of the various methods of dividing the interval,[1] golden section search is particularly simple and effective, as the interval proportions are preserved regardless of how the search proceeds:

${\displaystyle {\frac {1}{\varphi }}(x_{2}-x_{1})=x_{4}-x_{1}=x_{2}-x_{3}=\varphi (x_{2}-x_{4})=\varphi (x_{3}-x_{1})=\varphi ^{2}(x_{4}-x_{3})}$

where

${\displaystyle \varphi ={\frac {1}{2}}(1+{\sqrt {5}})\approx 1.618}$
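Golden section search can be sketched as follows. This is a minimal illustration assuming a unimodal f on the bracketing interval; because ${\displaystyle 1/\varphi ^{2}=1-1/\varphi }$, the surviving interior point from one step is reusable as an interior point of the next, so only one new function evaluation is needed per iteration.

```python
import math

def golden_section(f, x1, x2, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [x1, x2]."""
    inv_phi = (math.sqrt(5) - 1) / 2        # 1/phi ~ 0.618
    x3 = x2 - inv_phi * (x2 - x1)           # lower interior point
    x4 = x1 + inv_phi * (x2 - x1)           # upper interior point
    f3, f4 = f(x3), f(x4)
    while x2 - x1 > tol:
        if f3 < f4:                         # minimum lies in [x1, x4]
            x2, x4, f4 = x4, x3, f3         # reuse x3 as the new x4
            x3 = x2 - inv_phi * (x2 - x1)   # one new evaluation
            f3 = f(x3)
        else:                               # minimum lies in [x3, x2]
            x1, x3, f3 = x3, x4, f4         # reuse x4 as the new x3
            x4 = x1 + inv_phi * (x2 - x1)   # one new evaluation
            f4 = f(x4)
    return (x1 + x2) / 2

# Minimize (x - 2)^2 on the bracketing interval [0, 5].
xmin = golden_section(lambda x: (x - 2)**2, 0.0, 5.0)
```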

## References

1. ^ Box, M. J.; Davies, D.; Swann, W. H. (1969). Non-Linear Optimisation Techniques. Oliver & Boyd.

## Further reading

• Dennis, J. E., Jr.; Schnabel, Robert B. (1983). "Globally Convergent Modifications of Newton's Method". Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Englewood Cliffs: Prentice-Hall. pp. 111–154. ISBN 0-13-627216-9.
• Nocedal, Jorge; Wright, Stephen J. (1999). "Line Search Methods". Numerical Optimization. New York: Springer. pp. 34–63. ISBN 0-387-98793-2.
• Sun, Wenyu; Yuan, Ya-Xiang (2006). "Line Search". Optimization Theory and Methods: Nonlinear Programming. New York: Springer. pp. 71–117. ISBN 0-387-24975-3.