Gradient descent


Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. To find a local minimum of a function using gradient descent, we take steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. But if we instead take steps proportional to the positive of the gradient, we approach a local maximum of that function; the procedure is then known as gradient ascent. Gradient descent is generally attributed to Cauchy, who first suggested it in 1847,[1] but its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944.[2]


Illustration of gradient descent on a series of level sets

Gradient descent is based on the observation that if the multi-variable function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ decreases fastest if one goes from $\mathbf{a}$ in the direction of the negative gradient of $F$ at $\mathbf{a}$, $-\nabla F(\mathbf{a})$. It follows that, if

$$\mathbf{a}_{n+1} = \mathbf{a}_n - \gamma \nabla F(\mathbf{a}_n)$$

for a small enough step size or learning rate $\gamma \in \mathbb{R}_{+}$, then $F(\mathbf{a}_n) \geq F(\mathbf{a}_{n+1})$. In other words, the term $\gamma \nabla F(\mathbf{a})$ is subtracted from $\mathbf{a}$ because we want to move against the gradient, toward the local minimum. With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local minimum of $F$, and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots$ such that

$$\mathbf{x}_{n+1} = \mathbf{x}_n - \gamma_n \nabla F(\mathbf{x}_n), \quad n \geq 0.$$

We have a monotonic sequence

$$F(\mathbf{x}_0) \geq F(\mathbf{x}_1) \geq F(\mathbf{x}_2) \geq \cdots,$$

so hopefully the sequence $(\mathbf{x}_n)$ converges to the desired local minimum. Note that the value of the step size $\gamma$ is allowed to change at every iteration. With certain assumptions on the function $F$ (for example, $F$ convex and $\nabla F$ Lipschitz) and particular choices of $\gamma$ (e.g., chosen either via a line search that satisfies the Wolfe conditions, or the Barzilai–Borwein method[3][4] shown as follows),

$$\gamma_n = \frac{\left|(\mathbf{x}_n - \mathbf{x}_{n-1})^{\mathsf T}\left[\nabla F(\mathbf{x}_n) - \nabla F(\mathbf{x}_{n-1})\right]\right|}{\left\|\nabla F(\mathbf{x}_n) - \nabla F(\mathbf{x}_{n-1})\right\|^2},$$

convergence to a local minimum can be guaranteed. When the function $F$ is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.

This process is illustrated in the adjacent picture. Here $F$ is assumed to be defined on the plane, and its graph is assumed to have a bowl shape. The blue curves are the contour lines, that is, the regions on which the value of $F$ is constant. A red arrow originating at a point shows the direction of the negative gradient at that point. Note that the (negative) gradient at a point is orthogonal to the contour line going through that point. We see that gradient descent leads us to the bottom of the bowl, that is, to the point where the value of the function $F$ is minimal.
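
To make the iteration concrete, here is a minimal sketch in Python (the function name `gradient_descent` and the stopping rule are illustrative choices, not part of any standard library):

```python
import numpy as np

def gradient_descent(grad_F, x0, gamma=0.1, tol=1e-8, max_iters=10_000):
    """Iterate x_{n+1} = x_n - gamma * grad_F(x_n) until the step is tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        step = gamma * grad_F(x)
        x = x - step
        if np.linalg.norm(step) < tol:  # heuristic stopping criterion
            break
    return x

# Example: F(x) = ||x||^2 has gradient 2x and a unique minimum at the origin.
x_min = gradient_descent(lambda x: 2 * x, x0=np.array([3.0, -4.0]))
```

Here a fixed step size $\gamma$ is used for simplicity; as noted above, $\gamma$ may instead be chosen anew at every iteration.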

An analogy for understanding gradient descent

Fog in the mountains

The basic intuition behind gradient descent can be illustrated by a hypothetical scenario. A person is stuck in the mountains and is trying to get down (i.e. trying to find the global minimum). There is heavy fog such that visibility is extremely low. Therefore, the path down the mountain is not visible, so they must use local information to find the minimum. They can use the method of gradient descent, which involves looking at the steepness of the hill at their current position, then proceeding in the direction with the steepest descent (i.e. downhill). If they were trying to find the top of the mountain (i.e. the maximum), then they would proceed in the direction of steepest ascent (i.e. uphill). Using this method, they would eventually find their way down the mountain or possibly get stuck in some hole (i.e. local minimum or saddle point), like a mountain lake. However, assume also that the steepness of the hill is not immediately obvious with simple observation, but rather it requires a sophisticated instrument to measure, which the person happens to have at the moment. It takes quite some time to measure the steepness of the hill with the instrument, so they should minimize their use of the instrument if they want to get down the mountain before sunset. The difficulty then is choosing the frequency at which they should measure the steepness of the hill so as not to go off track.

In this analogy, the person represents the algorithm, and the path taken down the mountain represents the sequence of parameter settings that the algorithm will explore. The steepness of the hill represents the slope of the error surface at that point. The instrument used to measure steepness is differentiation (the slope of the error surface can be calculated by taking the derivative of the squared error function at that point). The direction they choose to travel in aligns with the gradient of the error surface at that point. The amount of time they travel before taking another measurement is the learning rate of the algorithm. See Backpropagation § Limitations for a discussion of the limitations of this type of "hill descending" algorithm.


Gradient descent has problems with pathological functions such as the Rosenbrock function shown here.

The Rosenbrock function has a narrow curved valley which contains the minimum. The bottom of the valley is very flat. Because of this curved, flat valley, the optimization zigzags slowly with small step sizes toward the minimum.
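
This behaviour can be reproduced with a short sketch (assuming the standard Rosenbrock parameters $a = 1$, $b = 100$, so the minimum is at $(1, 1)$; the step size below is an illustrative choice, small enough to avoid divergence in the steep directions):

```python
import numpy as np

def rosenbrock_grad(p):
    """Gradient of f(x, y) = (1 - x)^2 + 100 * (y - x^2)^2."""
    x, y = p
    return np.array([
        -2 * (1 - x) - 400 * x * (y - x**2),
        200 * (y - x**2),
    ])

p = np.array([-1.0, 1.0])
gamma = 1e-3                       # larger steps diverge across the valley walls
for _ in range(50_000):
    p -= gamma * rosenbrock_grad(p)
print(p)                           # crawls slowly along the flat valley toward (1, 1)
```

The many iterations needed, despite the tiny dimension of the problem, illustrate the zigzagging described above.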


The zigzagging nature of the method is also evident below, where the gradient descent method is applied to $F(x, y) = \sin\left(\tfrac{1}{2}x^2 - \tfrac{1}{4}y^2 + 3\right)\cos\left(2x + 1 - e^y\right)$.

The gradient descent algorithm in action (1: contours; 2: surface).

Choosing the step size and descent direction

Since using a step size $\gamma$ that is too small would slow convergence, and one too large would lead to divergence, finding a good setting of $\gamma$ is an important practical problem. Philip Wolfe also advocated using "clever choices of the [descent] direction" in practice.[5] Whilst using a direction that deviates from the steepest descent direction may seem counter-intuitive, the idea is that the smaller slope may be compensated for by being sustained over a much longer distance.

To reason about this mathematically, consider a direction $\mathbf{p}_n$ and step size $\gamma_n$ and the more general update:

$$\mathbf{a}_{n+1} = \mathbf{a}_n - \gamma_n\,\mathbf{p}_n.$$
Finding good settings of $\mathbf{p}_n$ and $\gamma_n$ requires a little thought. First of all, we would like the update direction to point downhill. Mathematically, letting $\theta_n$ denote the angle between $-\nabla F(\mathbf{a}_n)$ and $\mathbf{p}_n$, this requires that $\cos\theta_n > 0$. To say more, we need more information about the objective function that we are optimising. Under the fairly weak assumption that $F$ is continuously differentiable, we may prove that:[6]

$$F(\mathbf{a}_{n+1}) \leq F(\mathbf{a}_n) - \gamma_n \left\|\nabla F(\mathbf{a}_n)\right\|_2 \left\|\mathbf{p}_n\right\|_2 \left[\cos\theta_n - \max_{t\in[0,1]} \frac{\left\|\nabla F(\mathbf{a}_n - t\gamma_n\mathbf{p}_n) - \nabla F(\mathbf{a}_n)\right\|_2}{\left\|\nabla F(\mathbf{a}_n)\right\|_2}\right] \qquad (1)$$
This inequality implies that the amount by which we can be sure the function $F$ is decreased depends on a trade-off between the two terms in square brackets. The first term in square brackets measures the angle between the descent direction and the negative gradient. The second term measures how quickly the gradient changes along the descent direction.

In principle inequality (1) could be optimized over $\mathbf{p}_n$ and $\gamma_n$ to choose an optimal step size and direction. The problem is that evaluating the second term in square brackets requires evaluating $\nabla F(\mathbf{a}_n - t\gamma_n\mathbf{p}_n)$, and extra gradient evaluations are generally expensive and undesirable. Some ways around this problem are:

  • Forgo the benefits of a clever descent direction by setting $\mathbf{p}_n = \nabla F(\mathbf{a}_n)$, and use line search to find a suitable step size $\gamma_n$, such as one that satisfies the Wolfe conditions (a simple backtracking variant is sketched after this list).
  • Assuming that $F$ is twice-differentiable, use its Hessian $\nabla^2 F$ to estimate $\|\nabla F(\mathbf{a}_n - t\gamma_n\mathbf{p}_n) - \nabla F(\mathbf{a}_n)\|_2 \approx \|t\gamma_n \nabla^2 F(\mathbf{a}_n)\,\mathbf{p}_n\|_2$. Then choose $\mathbf{p}_n$ and $\gamma_n$ by optimising inequality (1).
  • Assuming that $\nabla F$ is Lipschitz, use its Lipschitz constant $L$ to bound $\|\nabla F(\mathbf{a}_n - t\gamma_n\mathbf{p}_n) - \nabla F(\mathbf{a}_n)\|_2 \leq L t \gamma_n \|\mathbf{p}_n\|_2$. Then choose $\mathbf{p}_n$ and $\gamma_n$ by optimising inequality (1).
  • Build a custom model of $\max_{t\in[0,1]} \tfrac{\|\nabla F(\mathbf{a}_n - t\gamma_n\mathbf{p}_n) - \nabla F(\mathbf{a}_n)\|_2}{\|\nabla F(\mathbf{a}_n)\|_2}$ for $F$. Then choose $\mathbf{p}_n$ and $\gamma_n$ by optimising inequality (1).
  • Under stronger assumptions on the function such as convexity, more advanced techniques may be possible.
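
As an illustration of the first recipe, the following sketch implements Armijo backtracking, a common simplification of a Wolfe-conditions line search (it enforces only the sufficient-decrease condition, not the curvature condition; the constants are conventional defaults, not prescribed here):

```python
import numpy as np

def backtracking_step(F, grad_F, x, gamma0=1.0, shrink=0.5, c=1e-4):
    """Shrink gamma until F decreases sufficiently along the
    steepest-descent direction p = -grad_F(x)."""
    g = grad_F(x)
    gamma = gamma0
    while F(x - gamma * g) > F(x) - c * gamma * (g @ g):
        gamma *= shrink            # halve the step until the Armijo test passes
    return x - gamma * g
```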

Usually by following one of the recipes above, convergence to a local minimum can be guaranteed. When the function $F$ is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.

Solution of a linear system

The steepest descent algorithm applied to the Wiener filter.[7]

Gradient descent can be used to solve a system of linear equations, reformulated as a quadratic minimization problem, e.g., using linear least squares. The solution of

$$A\mathbf{x} - \mathbf{b} = 0$$

in the sense of linear least squares is defined as minimizing the function

$$F(\mathbf{x}) = \left\|A\mathbf{x} - \mathbf{b}\right\|^2.$$

In traditional linear least squares for real $A$ and $\mathbf{b}$ the Euclidean norm is used, in which case

$$\nabla F(\mathbf{x}) = 2A^{\mathsf T}\!\left(A\mathbf{x} - \mathbf{b}\right).$$

In this case, the line search minimization, finding the locally optimal step size $\gamma$ on every iteration, can be performed analytically, and explicit formulas for the locally optimal $\gamma$ are known.[8]
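
For this quadratic objective the optimal step can be written in closed form, so steepest descent with exact line search fits in a few lines (a sketch, assuming a dense real matrix $A$; the direction $\mathbf{d} = A^{\mathsf T}\mathbf{r}$ is proportional to the negative gradient):

```python
import numpy as np

def steepest_descent_lstsq(A, b, x0=None, tol=1e-10, max_iters=1000):
    """Minimize F(x) = ||Ax - b||^2 with the locally optimal step size."""
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        r = b - A @ x                # residual
        d = A.T @ r                  # descent direction (-1/2 of the gradient)
        if np.linalg.norm(d) < tol:
            break
        Ad = A @ d
        gamma = (d @ d) / (Ad @ Ad)  # exact line-search minimizer of F(x + gamma d)
        x = x + gamma * d
    return x
```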

The algorithm is rarely used for solving linear equations, with the conjugate gradient method being one of the most popular alternatives. The number of gradient descent iterations is commonly proportional to the spectral condition number of the system matrix (the ratio of the maximum to minimum eigenvalues of $A^{\mathsf T}A$), while the convergence of the conjugate gradient method is typically determined by the square root of the condition number, i.e., it is much faster. Both methods can benefit from preconditioning, where gradient descent may require fewer assumptions on the preconditioner.[9]

Solution of a non-linear system

Gradient descent can also be used to solve a system of nonlinear equations. Below is an example that shows how to use gradient descent to solve for three unknown variables, $x_1$, $x_2$, and $x_3$. This example shows one iteration of gradient descent.

Consider the nonlinear system of equations

$$\begin{cases} 3x_1 - \cos(x_2 x_3) - \tfrac{3}{2} = 0 \\ 4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0 \\ \exp(-x_1 x_2) + 20x_3 + \tfrac{10\pi - 3}{3} = 0. \end{cases}$$

Let us introduce the associated function

$$G(\mathbf{x}) = \begin{bmatrix} 3x_1 - \cos(x_2 x_3) - \tfrac{3}{2} \\ 4x_1^2 - 625x_2^2 + 2x_2 - 1 \\ \exp(-x_1 x_2) + 20x_3 + \tfrac{10\pi - 3}{3} \end{bmatrix},$$

where $\mathbf{x} = (x_1, x_2, x_3)$. One might now define the objective function

$$F(\mathbf{x}) = \tfrac{1}{2} G^{\mathsf T}(\mathbf{x})\, G(\mathbf{x}),$$

which we will attempt to minimize. As an initial guess, let us use

$$\mathbf{x}^{(0)} = \mathbf{0} = (0, 0, 0).$$

We know that

$$\mathbf{x}^{(1)} = \mathbf{x}^{(0)} - \gamma_0 \nabla F(\mathbf{x}^{(0)}) = \mathbf{x}^{(0)} - \gamma_0 J_G(\mathbf{x}^{(0)})^{\mathsf T} G(\mathbf{x}^{(0)}),$$

where the Jacobian matrix $J_G$ is given by

$$J_G(\mathbf{x}) = \begin{bmatrix} 3 & x_3 \sin(x_2 x_3) & x_2 \sin(x_2 x_3) \\ 8x_1 & -1250x_2 + 2 & 0 \\ -x_2 \exp(-x_1 x_2) & -x_1 \exp(-x_1 x_2) & 20 \end{bmatrix}.$$

We calculate:

$$J_G(\mathbf{x}^{(0)}) = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 20 \end{bmatrix}, \qquad G(\mathbf{x}^{(0)}) = \begin{bmatrix} -2.5 \\ -1 \\ 10.472 \end{bmatrix}.$$

Thus

$$\mathbf{x}^{(1)} = \mathbf{0} - \gamma_0 \begin{bmatrix} -7.5 \\ -2 \\ 209.44 \end{bmatrix},$$

and

$$F(\mathbf{x}^{(0)}) = 0.5\left((-2.5)^2 + (-1)^2 + (10.472)^2\right) = 58.456.$$
An animation showing the first 83 iterations of gradient descent applied to this example. Surfaces are isosurfaces of $F(\mathbf{x}^{(n)})$ at the current guess $\mathbf{x}^{(n)}$, and arrows show the direction of descent. Due to a small and constant step size, the convergence is slow.

Now, a suitable $\gamma_0$ must be found such that

$$F(\mathbf{x}^{(1)}) \leq F(\mathbf{x}^{(0)}).$$

This can be done with any of a variety of line search algorithms. One might also simply guess $\gamma_0 = 0.001$, which gives

$$\mathbf{x}^{(1)} = (0.0075,\ 0.002,\ -0.20944).$$

Evaluating the objective function at this value yields

$$F(\mathbf{x}^{(1)}) = 0.5\left((-2.4775)^2 + (-0.9983)^2 + (6.2832)^2\right) = 23.306.$$

The decrease from $F(\mathbf{x}^{(0)}) = 58.456$ to the next step's value of $F(\mathbf{x}^{(1)}) = 23.306$ is a sizable decrease in the objective function. Further steps would reduce its value further, until an approximate solution to the system was found.
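
The iteration above can be checked with a direct transcription into Python (a sketch of this worked example; the function names are illustrative):

```python
import numpy as np

def G(x):
    """Residual vector of the nonlinear system."""
    x1, x2, x3 = x
    return np.array([
        3*x1 - np.cos(x2*x3) - 1.5,
        4*x1**2 - 625*x2**2 + 2*x2 - 1,
        np.exp(-x1*x2) + 20*x3 + (10*np.pi - 3)/3,
    ])

def F(x):
    """Objective F(x) = 1/2 * G(x)^T G(x)."""
    g = G(x)
    return 0.5 * g @ g

def J(x):
    """Jacobian matrix of G."""
    x1, x2, x3 = x
    return np.array([
        [3.0,                 x3*np.sin(x2*x3),   x2*np.sin(x2*x3)],
        [8*x1,                -1250*x2 + 2,       0.0],
        [-x2*np.exp(-x1*x2),  -x1*np.exp(-x1*x2), 20.0],
    ])

x0 = np.zeros(3)
grad = J(x0).T @ G(x0)       # gradient of F by the chain rule
x1 = x0 - 0.001 * grad       # one step with the guessed gamma_0 = 0.001
print(F(x0), F(x1))          # approximately 58.456 and 23.306
```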


Gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones. In the latter case, the search space is typically a function space, and one calculates the Fréchet derivative of the functional to be minimized to determine the descent direction.[10]

That gradient descent works in any number of dimensions (a finite number, at least) can be seen as a consequence of the Cauchy–Schwarz inequality, which shows that the magnitude of the inner (dot) product of two vectors of any dimension is maximized when they are collinear. In the case of gradient descent, that would be when the vector of independent-variable adjustments is proportional to the gradient vector of partial derivatives.

Gradient descent can take many iterations to compute a local minimum with a required accuracy if the curvature in different directions is very different for the given function. For such functions, preconditioning, which changes the geometry of the space to shape the function's level sets like concentric circles, cures the slow convergence. Constructing and applying preconditioning can be computationally expensive, however.
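
As a sketch of the idea (a hypothetical diagonal preconditioner for an ill-conditioned quadratic, not a general-purpose recipe):

```python
import numpy as np

def preconditioned_gd(grad_F, precond_solve, x0, gamma=1.0, num_iters=100):
    """Gradient descent with a preconditioner: rescale the gradient so the
    level sets behave more like concentric circles."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - gamma * precond_solve(grad_F(x))
    return x

# F(x) = 0.5 * x^T D x with a badly scaled diagonal D; using D itself as the
# preconditioner makes every coordinate converge at the same rate.
D = np.array([1.0, 1e4])
x = preconditioned_gd(lambda x: D * x, lambda g: g / D, np.array([1.0, 1.0]))
```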

Gradient descent can be combined with a line search, finding the locally optimal step size $\gamma$ on every iteration. Performing the line search can be time-consuming. Conversely, using a fixed small $\gamma$ can yield poor convergence.

Methods based on Newton's method and inversion of the Hessian using conjugate gradient techniques can be better alternatives.[11][12] Generally, such methods converge in fewer iterations, but the cost of each iteration is higher. An example is the BFGS method, which consists of calculating on every step a matrix by which the gradient vector is multiplied to go in a "better" direction, combined with a more sophisticated line search algorithm, to find the "best" value of $\gamma$. For extremely large problems, where computer-memory issues dominate, a limited-memory method such as L-BFGS should be used instead of BFGS or steepest descent.

Gradient descent can be viewed as applying Euler's method for solving ordinary differential equations to the gradient flow

$$\mathbf{x}'(t) = -\nabla f(\mathbf{x}(t)).$$

In turn, this equation may be derived as an optimal controller[13] for the control system $\mathbf{x}'(t) = \mathbf{u}(t)$ with $\mathbf{u}(t)$ given in feedback form $\mathbf{u}(t) = -\nabla f(\mathbf{x}(t))$.
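
To make the correspondence explicit, an explicit Euler step of size $\gamma$ applied to this flow recovers the standard gradient descent update:

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \gamma\,\mathbf{x}'(t_n) = \mathbf{x}_n - \gamma\,\nabla f(\mathbf{x}_n).$$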


Gradient descent can be extended to handle constraints by including a projection onto the set of constraints. This method is only feasible when the projection can be computed efficiently. Under suitable assumptions, this method converges. This method is a specific case of the forward-backward algorithm for monotone inclusions (which includes convex programming and variational inequalities).[14]
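
A minimal sketch of projected gradient descent, assuming the constraint set is the box $[0, 1]^n$, whose projection is simply coordinate-wise clipping:

```python
import numpy as np

def projected_gradient_descent(grad_F, project, x0, gamma=0.1, num_iters=100):
    """Take a gradient step, then project back onto the constraint set."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(num_iters):
        x = project(x - gamma * grad_F(x))
    return x

# Projection onto the box [0, 1]^n is just clipping each coordinate.
box_project = lambda x: np.clip(x, 0.0, 1.0)
```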

Fast gradient methods

Another extension of gradient descent is due to Yurii Nesterov from 1983,[15] and has been subsequently generalized. He provides a simple modification of the algorithm that enables faster convergence for convex problems. For unconstrained smooth problems the method is called the Fast Gradient Method (FGM) or the Accelerated Gradient Method (AGM). Specifically, if the differentiable function $F$ is convex and $\nabla F$ is Lipschitz, and it is not assumed that $F$ is strongly convex, then the error in the objective value generated at each step $k$ by the gradient descent method will be bounded by $\mathcal{O}(1/k)$. Using the Nesterov acceleration technique, the error decreases at $\mathcal{O}(1/k^2)$.[16] It is known that the rate $\mathcal{O}(1/k^2)$ for the decrease of the cost function is optimal for first-order optimization methods. Nevertheless, there is the opportunity to improve the algorithm by reducing the constant factor. The optimized gradient method (OGM)[17] reduces that constant by a factor of two and is an optimal first-order method for large-scale problems.[18]
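
One common form of the accelerated iteration can be sketched as follows (assuming an $L$-smooth convex objective with known Lipschitz constant $L$ of the gradient; the name `nesterov_agm` and the extrapolation schedule $t_k$ are a standard textbook variant, not a fixed specification):

```python
import numpy as np

def nesterov_agm(grad_F, x0, L, num_iters=100):
    """Nesterov's accelerated gradient method with step size 1/L."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()                    # extrapolated point
    t = 1.0
    for _ in range(num_iters):
        x_next = y - grad_F(y) / L                      # gradient step at y
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x
```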

For constrained or non-smooth problems, Nesterov's FGM is called the fast proximal gradient method (FPGM), an acceleration of the proximal gradient method.

The momentum method

Yet another extension, that reduces the risk of getting stuck in a local minimum, as well as speeds up the convergence considerably in cases where the process would otherwise zig-zag heavily, is the momentum method, which uses a momentum term in analogy to "the mass of Newtonian particles that move through a viscous medium in a conservative force field".[19] This method is often used as an extension to the backpropagation algorithms used to train artificial neural networks.[20][21]
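
A minimal sketch of the momentum update (the coefficients are illustrative defaults; $\mu$ plays the role of the viscous friction in the physical analogy above):

```python
import numpy as np

def momentum_gd(grad_F, x0, gamma=0.01, mu=0.9, num_iters=1000):
    """Gradient descent with a momentum ("heavy ball") term: the velocity
    accumulates past gradients, damping zig-zag across narrow valleys."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(num_iters):
        v = mu * v - gamma * grad_F(x)   # velocity update with friction mu
        x = x + v
    return x
```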

References

  1. ^ Lemaréchal, C. (2012). "Cauchy and the Gradient Method" (PDF). Doc Math Extra: 251–254.
  2. ^ Curry, Haskell B. (1944). "The Method of Steepest Descent for Non-linear Minimization Problems". Quart. Appl. Math. 2 (3): 258–261. doi:10.1090/qam/10667.
  3. ^ Barzilai, Jonathan; Borwein, Jonathan M. (1988). "Two-Point Step Size Gradient Methods". IMA Journal of Numerical Analysis. 8 (1): 141–148. doi:10.1093/imanum/8.1.141.
  4. ^ Fletcher, R. (2005). "On the Barzilai–Borwein Method". In Qi, L.; Teo, K.; Yang, X. (eds.). Optimization and Control with Applications. Applied Optimization. 96. Boston: Springer. pp. 235–256. ISBN 0-387-24254-6.
  5. ^ Wolfe, Philip (1969-04-01). "Convergence Conditions for Ascent Methods". SIAM Review. 11 (2): 226–235. doi:10.1137/1011036. ISSN 0036-1445.
  6. ^ Bernstein, Jeremy; Vahdat, Arash; Yue, Yisong; Liu, Ming-Yu (2020-06-12). "On the distance between two neural networks and the stability of learning". arXiv:2002.03432 [cs.LG].
  7. ^ Haykin, Simon S. (2008). Adaptive Filter Theory. Pearson Education India. pp. 108–142, 217–242.
  8. ^ Yuan, Ya-xiang (1999). "Step-sizes for the gradient method" (PDF). AMS/IP Studies in Advanced Mathematics. 42 (2): 785.
  9. ^ Bouwmeester, Henricus; Dougherty, Andrew; Knyazev, Andrew V. (2015). "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods". Procedia Computer Science. 51: 276–285.
  10. ^ Akilov, G. P.; Kantorovich, L. V. (1982). Functional Analysis (2nd ed.). Pergamon Press. ISBN 0-08-023036-9.
  11. ^ Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (1992). Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). New York: Cambridge University Press. ISBN 0-521-43108-5.
  12. ^ Strutz, T. (2016). Data Fitting and Uncertainty: A Practical Introduction to Weighted Least Squares and Beyond (2nd ed.). Springer Vieweg. ISBN 978-3-658-11455-8.
  13. ^ Ross, I. M. (2019-07-01). "An optimal control theory for nonlinear optimization". Journal of Computational and Applied Mathematics. 354: 39–51. doi:10.1016/ ISSN 0377-0427.
  14. ^ Combettes, P. L.; Pesquet, J.-C. (2011). "Proximal splitting methods in signal processing". In Bauschke, H. H.; Burachik, R. S.; Combettes, P. L.; Elser, V.; Luke, D. R.; Wolkowicz, H. (eds.). Fixed-Point Algorithms for Inverse Problems in Science and Engineering. New York: Springer. pp. 185–212. arXiv:0912.3522. ISBN 978-1-4419-9568-1.
  15. ^ Nesterov, Yu. (2004). Introductory Lectures on Convex Optimization : A Basic Course. Springer. ISBN 1-4020-7553-7.
  16. ^ Vandenberghe, Lieven (2019). "Fast Gradient Methods" (PDF). Lecture notes for EE236C at UCLA.
  17. ^ Kim, D.; Fessler, J. A. (2016). "Optimized First-order Methods for Smooth Convex Minimization". Math. Prog. 151 (1–2): 81–107. arXiv:1406.5468. doi:10.1007/s10107-015-0949-3. PMC 5067109. PMID 27765996. S2CID 207055414.
  18. ^ Drori, Yoel (2017). "The Exact Information-based Complexity of Smooth Convex Minimization". Journal of Complexity. 39: 1–16. arXiv:1606.01424. doi:10.1016/j.jco.2016.11.001. S2CID 205861966.
  19. ^ Qian, Ning (January 1999). "On the momentum term in gradient descent learning algorithms" (PDF). Neural Networks. 12 (1): 145–151. doi:10.1016/S0893-6080(98)00116-6. PMID 12662723. Archived from the original (PDF) on 8 May 2014. Retrieved 17 October 2014.
  20. ^ "Momentum and Learning Rate Adaptation". Willamette University. Retrieved 17 October 2014.
  21. ^ Geoffrey Hinton; Nitish Srivastava; Kevin Swersky. "The momentum method". Coursera. Retrieved 2 October 2018. Part of a lecture series for the Coursera online course Neural Networks for Machine Learning Archived 2016-12-31 at the Wayback Machine.
