Learning rate

From Wikipedia, the free encyclopedia

In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function.[1] Since it influences the extent to which newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns."[2] The learning rate is often denoted by the character η or α.

In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the direction toward the minimum is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction.[3] Too high a learning rate will make the learning jump over minima, while too low a learning rate will either take too long to converge or get stuck in an undesirable local minimum.

In order to achieve faster convergence, prevent oscillations, and avoid getting stuck in undesirable local minima, the learning rate is often varied during training, either in accordance with a learning rate schedule or by using an adaptive learning rate.[4] In Newton's method, the learning rate is essentially determined from the local curvature of the loss function, by using the inverse of the Hessian matrix as the step size.
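As an illustration of how the learning rate scales each step, the following is a minimal sketch of gradient descent in Python; the function name and the values used are hypothetical, chosen only for this example.

    def gradient_descent(grad, x0, eta=0.1, steps=100):
        # Repeatedly step against the gradient; the learning rate eta scales
        # how far each step moves.
        x = x0
        for _ in range(steps):
            x = x - eta * grad(x)
        return x

    # Minimizing f(x) = x**2, whose gradient is 2*x; the iterates approach 0.
    print(gradient_descent(lambda x: 2 * x, x0=5.0))

For this particular function, choosing eta larger than 1 makes the iterates diverge instead of converging, illustrating the overshooting trade-off described above.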

Learning rate schedule

A learning rate schedule changes the learning rate during learning, most often between epochs or iterations. This is mainly done with two parameters: decay and momentum. There are many different learning rate schedules, but the most common are time-based, step-based and exponential.[4]

Decay, which is controlled by a hyperparameter, serves to settle the learning in a good region and avoid oscillations, a situation that may arise when a constant learning rate is too high and makes the learning jump back and forth over a minimum.

Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to the ball's mass, which must be chosen manually: too high and the ball will roll over minima that we wish to find, too low and it will not fulfil its purpose. The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras.
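A minimal sketch of a classical momentum update in Python, assuming the common velocity-based formulation (the names momentum_step, gamma and eta are illustrative and not taken from the article):

    def momentum_step(theta, velocity, grad, eta=0.01, gamma=0.9):
        # gamma is the momentum hyperparameter (the "mass-like" term above).
        # The velocity accumulates past gradients, so steps speed up when the
        # gradient keeps pointing the same way and small bumps are rolled over.
        velocity = gamma * velocity - eta * grad(theta)
        theta = theta + velocity
        return theta, velocity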

Time-based learning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay, the mathematical formula for the learning rate is:

    \eta_{n+1} = \frac{\eta_n}{1 + d n}

where η_n is the learning rate, d is a decay parameter and n is the iteration step.
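A sketch of this schedule in Python (the function name and default decay value are illustrative):

    def time_based_decay(eta_n, n, d=0.01):
        # eta_{n+1} = eta_n / (1 + d * n): the current rate is divided by a
        # factor that grows with the iteration step n.
        return eta_n / (1.0 + d * n)

    eta = 0.1
    for n in range(100):
        eta = time_based_decay(eta, n)  # update the rate at each iteration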

Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:

    \eta_n = \eta_0 d^{\left\lfloor \frac{1+n}{r} \right\rfloor}

where η_n is the learning rate at iteration n, η_0 is the initial learning rate, d is how much the learning rate should change at each drop (0.5 corresponds to a halving) and r corresponds to the drop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). The floor function here drops the value of its input to 0 for all values smaller than 1.
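A sketch of the step-based schedule (the function name and default values are illustrative):

    import math

    def step_decay(n, eta0=0.1, d=0.5, r=10):
        # eta_n = eta0 * d ** floor((1 + n) / r):
        # with d = 0.5 and r = 10, the rate is halved every 10 iterations.
        return eta0 * d ** math.floor((1 + n) / r)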

Exponential learning schedules are similar to step-based, but instead of steps a decreasing exponential function is used. The mathematical formula for factoring in the decay is:

    \eta_n = \eta_0 e^{-d n}

where d is a decay parameter.
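A sketch of the exponential schedule (illustrative names and values):

    import math

    def exponential_decay(n, eta0=0.1, d=0.01):
        # eta_n = eta0 * exp(-d * n): the rate decays smoothly rather than in discrete steps.
        return eta0 * math.exp(-d * n)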

Adaptive learning rate

The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam, which are generally built into deep learning libraries such as Keras.
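To illustrate the idea, the following is a rough Adagrad-style update written with NumPy; it is a sketch of the general technique, not the implementation used in any particular library, and the names adagrad_step, accum and eps are hypothetical.

    import numpy as np

    def adagrad_step(theta, grad, accum, eta=0.01, eps=1e-8):
        # accum holds the running sum of squared gradients per parameter.
        # Dividing by its square root gives each parameter its own effective
        # learning rate, which shrinks as that parameter keeps being updated,
        # so no manual schedule is needed.
        accum = accum + grad ** 2
        theta = theta - eta * grad / (np.sqrt(accum) + eps)
        return theta, accum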

See also

References

  1. ^ Murphy, Kevin P. (2012). Machine Learning: A Probabilistic Perspective. Cambridge: MIT Press. p. 247. ISBN 978-0-262-01802-9.
  2. ^ Hafidz Zulkifli (21 January 2018). "Understanding Learning Rates and How It Improves Performance in Deep Learning". Towards Data Science. Retrieved 15 February 2019. Learning rate is a hyper-parameter that controls how much we are adjusting the weights of our network with respect the loss gradient.
  3. ^ Nesterov, Y. (2004). Introductory Lectures on Convex Optimization: A Basic Course. Boston: Kluwer. p. 25. ISBN 1-4020-7553-7.
  4. ^ a b Suki Lau (29 July 2017). "Learning Rate Schedules and Adaptive Learning Rate Methods for Deep Learning". Towards Data Science. Retrieved 12 March 2019. In order to achieve faster convergence, prevent oscillations and getting stuck in local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate.

Further reading

  • Géron, Aurélien (2017). "Gradient Descent". Hands-On Machine Learning with Scikit-Learn and TensorFlow. O'Reilly. pp. 113–124. ISBN 978-1-4919-6229-9.
  • Plagianakos, V. P.; Magoulas, G. D.; Vrahatis, M. N. (2001). "Learning Rate Adaptation in Stochastic Gradient Descent". Advances in Convex Analysis and Global Optimization. Kluwer. pp. 433–444. ISBN 0-7923-6942-4.

External links

  • de Freitas, Nando (February 12, 2015). "Optimization". Deep Learning Lecture 6. University of Oxford – via YouTube.