
Hyperparameter (machine learning)

From Wikipedia, the free encyclopedia


In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training.

Hyperparameters can be classified as model hyperparameters, which cannot be inferred while fitting the machine to the training set because they refer to the model selection task, or algorithm hyperparameters, which in principle have no influence on the performance of the model but affect the speed and quality of the learning process. Examples of model hyperparameters include the topology and size of a neural network. Examples of algorithm hyperparameters are learning rate and mini-batch size.[clarification needed]
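
As an illustrative sketch (in Python, not drawn from the cited sources), the two kinds of hyperparameters for a feed-forward neural network might be grouped as follows; the network's weights, by contrast, are ordinary parameters derived via training.

```python
# Illustrative grouping (assumed example, not from the cited sources).
model_hyperparameters = {
    "hidden_layers": 3,              # topology: number of hidden layers
    "hidden_units": [128, 64, 32],   # size: units per hidden layer
}
algorithm_hyperparameters = {
    "learning_rate": 1e-3,    # step size of the optimizer
    "mini_batch_size": 32,    # examples per gradient update
}
# The node weights are not listed here: they are ordinary parameters
# whose values are derived via training.
```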

Different model training algorithms require different hyperparameters, while some simple algorithms (such as ordinary least squares regression) require none. Given these hyperparameters, the training algorithm learns the parameters from the data. For instance, LASSO is an algorithm that adds a regularization hyperparameter to ordinary least squares regression; this hyperparameter has to be set before estimating the parameters through the training algorithm.
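
A minimal sketch of this distinction, assuming scikit-learn is available: the LASSO regularization strength alpha is a hyperparameter fixed before fitting, while the regression coefficients are parameters estimated from the data.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, 0.0, 2.0, 0.0]) + 0.1 * rng.normal(size=100)

ols = LinearRegression().fit(X, y)    # ordinary least squares: no hyperparameters to choose
lasso = Lasso(alpha=0.1).fit(X, y)    # alpha is set before fitting

print(ols.coef_)      # parameters learned by ordinary least squares
print(lasso.coef_)    # parameters learned given the hyperparameter alpha
```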

Considerations

The time required to train and test a model can depend upon the choice of its hyperparameters.[1] A hyperparameter is usually of continuous or integer type, leading to mixed-type optimization problems.[1] The existence of some hyperparameters is conditional upon the value of others, e.g. the size of each hidden layer in a neural network can be conditional upon the number of layers.[1]
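
A sketch of such a conditional, mixed-type search space (illustrative only): the per-layer sizes exist only once the number of layers has been drawn.

```python
import random

def sample_configuration(seed=None):
    rng = random.Random(seed)
    config = {
        "learning_rate": 10 ** rng.uniform(-4, -1),   # continuous hyperparameter
        "num_layers": rng.randint(1, 4),              # integer hyperparameter
    }
    # Conditional hyperparameters: one size per hidden layer, defined
    # only after "num_layers" is known.
    config["layer_sizes"] = [rng.choice([32, 64, 128, 256])
                             for _ in range(config["num_layers"])]
    return config

print(sample_configuration(seed=0))
```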

Difficult-to-learn parameters

Usually, but not always, hyperparameters cannot be learned using the well-known gradient-based methods (such as gradient descent or L-BFGS) that are commonly employed to learn parameters. These hyperparameters are parameters that describe a model representation, cannot be learned by common optimization methods, and nonetheless affect the loss function. An example would be the tolerance hyperparameter for errors in support vector machines.
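
A minimal sketch, assuming scikit-learn's support vector classifier, where the penalty parameter C is one common example of such an error-tolerance setting: it changes the fitted model and its loss, but is not itself updated by training.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(C=C, kernel="rbf").fit(X, y)   # C is fixed before training
    print(C, clf.score(X, y))                # training accuracy varies with C
```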

Untrainable parameters

Sometimes, hyperparameters cannot be learned from the training data because they aggressively increase the capacity of a model and can push the loss function to a bad minimum, overfitting to noise in the data rather than correctly mapping the richness of its structure. For example, if the degree of the polynomial fitted by a regression model were treated as a trainable parameter, training would raise the degree until the model fit the data perfectly, yielding a small training error but poor generalization performance.
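
A toy illustration with NumPy: increasing the polynomial degree can only lower the training error, while the error on held-out data eventually worsens, which is why the degree is treated as a hyperparameter rather than a trainable parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=20)
y_test = np.sin(3 * x_test) + 0.1 * rng.normal(size=20)

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)    # fit the polynomial parameters
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
```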

Tunability

Most performance variation can be attributed to just a few hyperparameters.[2][1][3] The tunability of an algorithm, hyperparameter, or interacting hyperparameters is a measure of how much performance can be gained by tuning it.[4] For an LSTM, the learning rate, followed by the network size, are its most crucial hyperparameters,[5] whereas batching and momentum have no significant effect on its performance.[6]
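
A toy sketch of the tunability notion (the gain from tuning relative to a default setting); the evaluate function here is a hypothetical placeholder for training and scoring a model.

```python
import math

def evaluate(learning_rate=1e-3):
    # Placeholder score that peaks near learning_rate = 1e-2;
    # a real implementation would train and validate a model here.
    return 1.0 - 0.1 * abs(math.log10(learning_rate) + 2.0)

default_score = evaluate()                                        # performance at the default
tuned_score = max(evaluate(lr) for lr in (1e-4, 1e-3, 1e-2, 1e-1))
tunability = tuned_score - default_score                          # gain attributable to tuning
print(round(tunability, 3))
```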

Although some research has advocated the use of mini-batch sizes in the thousands, other work has found the best performance with mini-batch sizes between 2 and 32.[7]

Robustness

The inherent stochasticity of learning directly implies that the empirical performance of a hyperparameter configuration is not necessarily its true performance.[1] Methods that are not robust to simple changes in hyperparameters, random seeds, or even different implementations of the same algorithm cannot be integrated into mission-critical control systems without significant simplification and robustification.[8]

Reinforcement learning algorithms, in particular, require measuring their performance over a large number of random seeds, and also measuring their sensitivity to choices of hyperparameters.[8] Their evaluation with a small number of random seeds does not capture performance adequately due to high variance.[8] Some reinforcement learning methods, e.g. DDPG (Deep Deterministic Policy Gradient), are more sensitive to hyperparameter choices than others.[8]
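
A hedged sketch of such an evaluation protocol; train_agent is a hypothetical placeholder for a full reinforcement learning training run.

```python
import random
import statistics

def train_agent(seed, learning_rate=3e-4):
    # Placeholder: a real implementation would train an agent (e.g. with DDPG)
    # using this seed and return its final mean episode return.
    return random.Random(seed).gauss(100.0, 25.0)

returns = [train_agent(seed) for seed in range(20)]   # many seeds, not just one
print("mean:", round(statistics.mean(returns), 1),
      "stdev:", round(statistics.stdev(returns), 1))  # high variance flags fragility
```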

Optimization

Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given test data.[1] The objective function takes a tuple of hyperparameters and returns the associated loss.[1]
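
A minimal sketch of this objective, here minimized by exhaustive grid search over a small space, using scikit-learn's Lasso purely as an example model; the held-out split stands in for the test data mentioned above.

```python
import itertools
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def objective(hyperparameters):
    # Maps a tuple of hyperparameters to the associated loss.
    alpha, max_iter = hyperparameters
    model = Lasso(alpha=alpha, max_iter=max_iter).fit(X_tr, y_tr)
    return np.mean((model.predict(X_val) - y_val) ** 2)

grid = itertools.product([0.001, 0.01, 0.1, 1.0], [500, 2000])
best = min(grid, key=objective)   # tuple minimizing the held-out loss
print("best (alpha, max_iter):", best)
```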

Reproducibility

Apart from tuning hyperparameters, machine learning involves storing and organizing the parameters and results, and making sure they are reproducible.[9] In the absence of a robust infrastructure for this purpose, research code often evolves quickly and compromises essential aspects like bookkeeping and reproducibility.[10] Online collaboration platforms for machine learning go further by allowing scientists to automatically share, organize and discuss experiments, data, and algorithms.[11]
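
An illustrative sketch of such bookkeeping (not the API of any particular platform): appending each run's hyperparameters, metrics, and random seed to a log file so that experiments can later be reproduced.

```python
import json
import pathlib
import time

def log_run(hyperparameters, metrics, seed, path="runs.jsonl"):
    record = {
        "timestamp": time.time(),
        "seed": seed,
        "hyperparameters": hyperparameters,
        "metrics": metrics,
    }
    with pathlib.Path(path).open("a") as f:
        f.write(json.dumps(record) + "\n")   # one JSON record per run

log_run({"learning_rate": 1e-3, "batch_size": 32},
        {"val_accuracy": 0.91}, seed=0)
```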

A number of relevant services and open source software exist:

Services

Name | Interfaces
Comet.ml[12] | Python[13]
OpenML[14][11][15][16] | REST, Python, Java, R[17]
Weights & Biases[18] | Python[19]

Software

Name | Interfaces | Store
OpenML Docker[14][11][15][16] | REST, Python, Java, R[17] | MySQL
sacred[9][10] | Python[20] | file, MongoDB, TinyDB, SQL


References

  1. ^ a b c d e f g Template:Cite article
  2. ^ Leyton-Brown, Kevin; Hoos, Holger; Hutter, Frank (January 27, 2014). "An Efficient Approach for Assessing Hyperparameter Importance": 754–762 – via proceedings.mlr.press.
  3. ^ Template:Cite article
  4. ^ Template:Cite article
  5. ^ Greff, K.; Srivastava, R. K.; Koutník, J.; Steunebrink, B. R.; Schmidhuber, J. (October 23, 2017). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv:1503.04069. doi:10.1109/TNNLS.2016.2582924. PMID 27411231.
  6. ^ Template:Cite article
  7. ^ Template:Cite article
  8. ^ a b c d Template:Cite article
  9. ^ a b Template:Cite article
  10. ^ a b Template:Cite article
  11. ^ a b c Template:Cite article
  12. ^ "Comet.ml – Machine Learning Experiment Management".
  13. ^ Comet ML, Inc. "comet-ml: Supercharging Machine Learning" – via PyPI.
  14. ^ a b Van Rijn, Jan N.; Bischl, Bernd; Torgo, Luis; Gao, Bo; Umaashankar, Venkatesh; Fischer, Simon; Winter, Patrick; Wiswedel, Bernd; Berthold, Michael R.; Vanschoren, Joaquin (2013). "OpenML: A Collaborative Science Platform". Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. Vol. 7908. Springer, Berlin, Heidelberg. pp. 645–649. doi:10.1007/978-3-642-40994-3_46. ISBN 978-3-642-38708-1.
  15. ^ a b "Vanschoren, Joaquin, Jan N. van Rijn, and Bernd Bischl. "Taking machine learning research online with OpenML." Proceedings of the 4th International Conference on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications-Volume 41. JMLR. org, 2015" (PDF).
  16. ^ a b "van Rijn, J. N. Massively collaborative machine learning. Diss. 2016". 2016-12-19.
  17. ^ a b "OpenML". GitHub.
  18. ^ "Weights & Biases for Experiment Tracking and Collaboration".{{cite web}}: CS1 maint: url-status (link)
  19. ^ "Monitor your Machine Learning models with PyEnv".{{cite web}}: CS1 maint: url-status (link)
  20. ^ Greff, Klaus (2020-01-03). "sacred: Facilitates automated and reproducible experimental research" – via PyPI.