Normalization model

From Wikipedia, the free encyclopedia

The normalization model[1] is an influential model of responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s,[2] and later refined it together with Matteo Carandini and J. Anthony Movshon.[3] The model involves a divisive stage: the numerator is the output of the neuron's classical receptive field, and the denominator is a constant plus a measure of local stimulus contrast, typically the pooled activity of a population of nearby neurons. Although the normalization model was initially developed to explain responses in the primary visual cortex, normalization is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions, including the representation of odors, the modulatory effects of visual attention, the encoding of value, and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that normalization serves as a canonical neural computation.
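The divisive stage described above can be illustrated with a minimal numerical sketch. This is an illustrative implementation only, not code from the cited papers; the parameter names `sigma` (the semi-saturation constant), `n` (the exponent), and `gamma` (a gain factor) are generic placeholders, and the normalization pool is taken, for simplicity, to be the entire input population:

```python
import numpy as np

def normalize(drives, sigma=1.0, n=2.0, gamma=1.0):
    """Divisive normalization sketch: each neuron's output is its
    classical-receptive-field drive (raised to exponent n), divided by a
    constant plus the summed activity of the whole population.
    Parameter names are illustrative, not from a specific paper."""
    drives = np.asarray(drives, dtype=float)
    numerator = gamma * drives ** n
    denominator = sigma ** n + np.sum(drives ** n)
    return numerator / denominator

# At low contrast the constant sigma dominates the denominator and
# responses grow roughly with the stimulus; at high contrast the pooled
# activity dominates, so the total response saturates near gamma while
# the *relative* pattern across neurons is preserved.
weak = normalize([1.0, 2.0, 3.0])     # denominator = 1 + 14 = 15
strong = normalize([10.0, 20.0, 30.0])  # denominator = 1 + 1400 = 1401
```

Scaling all inputs by ten barely changes the ratios between the normalized responses, which is one way the model accounts for contrast-invariant stimulus selectivity.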

References

  1. ^ Carandini, M.; Heeger, D. J. (2011). "Normalization as a canonical neural computation". Nature Reviews Neuroscience 13 (1): 51–62. doi:10.1038/nrn3136. PMC 3273486. PMID 22108672.
  2. ^ Heeger, D. J. (1992). "Normalization of cell responses in cat striate cortex". Visual Neuroscience 9 (2): 181–197. doi:10.1017/S0952523800009640. PMID 1504027.
  3. ^ Carandini, M.; Heeger, D. J.; Movshon, J. A. (1997). "Linearity and normalization in simple cells of the macaque primary visual cortex". Journal of Neuroscience 17 (21): 8621–8644. PMID 9334433.