Dilution (neural networks)

Dropout is a technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is an efficient way of performing model averaging with neural networks.[1] The term "dropout" refers to dropping out units (both hidden and visible) in a neural network.[2]
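
During training, each unit is dropped (its output set to zero) at random, so a different thinned sub-network is trained on each example; at test time all units are used. The following is a minimal sketch of that masking step, assuming NumPy and the common "inverted" variant that rescales the kept units by 1/p; the keep probability p and the layer size are illustrative choices, not values prescribed by the cited papers.

import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Zero each unit independently with probability 1 - p during training.

    Dividing the surviving units by p keeps the expected activation
    unchanged, so the network can be used as-is at test time.
    """
    if not training:
        return activations
    mask = rng.random(activations.shape) < p   # keep each unit with probability p
    return activations * mask / p

# Example: activations of a hidden layer with 8 units.
h = rng.standard_normal(8)
print(dropout(h, p=0.5, training=True))    # about half the units set to zero
print(dropout(h, training=False))          # unchanged when not training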

See also

Convolutional neural network § Dropout

References

  1. ^ "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580. Retrieved July 26, 2015.
  2. ^ "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". Journal of Machine Learning Research (jmlr.org). Retrieved July 26, 2015.