Batch normalization


Batch normalization is a technique for improving the performance and stability of artificial neural networks. It provides any layer in a neural network with inputs that have zero mean and unit variance over each mini-batch.[1] Batch normalization was introduced by Sergey Ioffe and Christian Szegedy in a 2015 paper.[2][3] It normalizes a layer's inputs by adjusting and scaling the activations.[4]
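The following is a minimal sketch of the training-time batch normalization transform, assuming a 2-D mini-batch of shape (batch, features) in NumPy; the names batch_norm_forward, gamma, beta, and eps are illustrative and not taken from the cited sources.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature of a mini-batch to zero mean and unit
    variance, then apply a learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta              # scaled and shifted output

# Example: a mini-batch of 4 samples with 3 features
x = np.random.randn(4, 3) * 5.0 + 2.0
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # approximately 0 for each feature
print(out.std(axis=0))   # approximately 1 for each feature
```

With gamma set to ones and beta to zeros the output is simply the normalized activations; during training these parameters are learned, letting the network recover the original scale and shift if that is optimal.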

References

  1. ^ "Understanding the backward pass through Batch Normalization Layer". kratzert.github.io. Retrieved 24 April 2018.
  2. ^ Ioffe, Sergey; Szegedy, Christian (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (PDF). arXiv:1502.03167.
  3. ^ "Glossary of Deep Learning: Batch Normalisation". medium.com. Retrieved 24 April 2018.
  4. ^ "Batch normalization in Neural Networks". towardsdatascience.com. Retrieved 24 April 2018.