Visual cortex


View of the brain from behind. Red = Brodmann area 17 (primary visual cortex); orange = area 18; yellow = area 19
Brain shown from the side, facing left. Above: view from outside; below: cut through the middle. Orange = Brodmann area 17 (primary visual cortex)
Identifiers: Latin cortex visualis; MeSH D014793; NeuroLex ID nlx_143552; FMA 242644

The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5 (also known as V2, V3, V4, and V5, or Brodmann area 18 and all of Brodmann area 19).[1]

Both hemispheres of the brain include a visual cortex; the visual cortex in the left hemisphere receives signals from the right visual field, and the visual cortex in the right hemisphere receives signals from the left visual field.

Introduction

The primary visual cortex (V1) is located in and around the calcarine fissure in the occipital lobe. Each hemisphere's V1 receives information directly from its ipsilateral lateral geniculate nucleus that receives signals from the contralateral visual hemifield.

Neurons in the visual cortex fire action potentials when visual stimuli appear within their receptive field. By definition, the receptive field is the region within the entire visual field that elicits an action potential. However, any given neuron may respond best to a particular subset of stimuli within its receptive field. This property is called neuronal tuning. In the earlier visual areas, neurons have simpler tuning. For example, a neuron in V1 may fire in response to any vertical stimulus in its receptive field. In the higher visual areas, neurons have more complex tuning. For example, in the inferior temporal cortex (IT), a neuron may fire only when a certain face appears in its receptive field.
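
The idea of neuronal tuning can be made concrete with a toy model. The sketch below (a simplification, not a description of any recorded neuron) treats a model V1 cell as a bell-shaped tuning curve over stimulus orientation: firing is highest at a preferred orientation and falls off for other orientations within the receptive field. The function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def v1_orientation_response(stimulus_deg, preferred_deg=90.0,
                            bandwidth_deg=20.0, peak_rate_hz=50.0,
                            baseline_hz=2.0):
    """Toy orientation tuning curve for a model V1 neuron.

    Orientation is circular with period 180 degrees (a bar at 0 deg is
    identical to one at 180 deg), so the angular difference is wrapped
    before applying a Gaussian-shaped tuning curve.
    """
    delta = (stimulus_deg - preferred_deg + 90.0) % 180.0 - 90.0
    return baseline_hz + peak_rate_hz * np.exp(-0.5 * (delta / bandwidth_deg) ** 2)

# A vertical bar (90 deg) drives the model cell strongly, while a
# horizontal bar (0 deg) evokes little more than the baseline rate.
for angle in (0, 45, 90, 135):
    print(f"{angle:3d} deg -> {v1_orientation_response(angle):5.1f} Hz")
```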

Furthermore, the arrangement of receptive fields in V1 is retinotopic, meaning neighboring cells in V1 have receptive fields that correspond to adjacent portions of the visual field. This spatial organization allows for a systematic representation of the visual world within V1. Additionally, recent studies have delved into the role of contextual modulation in V1, where the perception of a stimulus is influenced not only by the stimulus itself but also by the surrounding context, highlighting the intricate processing capabilities of V1 in shaping our visual experiences.[2]

The visual cortex receives its blood supply primarily from the calcarine branch of the posterior cerebral artery.

The size of V1, V2, and V3 can vary three-fold, a difference that is partially inherited.[3]

Psychological model of the neural processing of visual information

The dorsal stream (green) and ventral stream (purple) are shown. Both originate in the primary visual cortex.

Ventral-dorsal model

V1 transmits information to two primary pathways, called the ventral stream and the dorsal stream.[4]

  • The ventral stream begins with V1, goes through visual area V2, then through visual area V4, and to the inferior temporal cortex (IT cortex). The ventral stream, sometimes called the "What Pathway", is associated with form recognition and object representation. It is also associated with storage of long-term memory.
  • The dorsal stream begins with V1, goes through visual area V2, then to the dorsomedial area (DM/V6) and middle temporal area (MT/V5), and then to the posterior parietal cortex. The dorsal stream, sometimes called the "Where Pathway" or "How Pathway", is associated with motion, representation of object locations, and control of the eyes and arms, especially when visual information is used to guide saccades or reaching.

The what vs. where account of the ventral/dorsal pathways was first described by Ungerleider and Mishkin.[5]

More recently, Goodale and Milner extended these ideas and suggested that the ventral stream is critical for visual perception whereas the dorsal stream mediates the visual control of skilled actions.[6] It has been shown that visual illusions such as the Ebbinghaus illusion distort judgements of a perceptual nature, but when the subject responds with an action, such as grasping, no distortion occurs.[7]

Work such as that of Franz et al.[8] suggests that both the action and perception systems are equally fooled by such illusions. Other studies, however, provide strong support for the idea that skilled actions such as grasping are not affected by pictorial illusions[9][10] and suggest that the action/perception dissociation is a useful way to characterize the functional division of labor between the dorsal and ventral visual pathways in the cerebral cortex.[11]

Primary visual cortex (V1)

Micrograph showing the visual cortex (pink). The pia mater and arachnoid mater including blood vessels are seen at the top of the image. Subcortical white matter (blue) is seen at the bottom of the image. HE-LFB stain.

The primary visual cortex is the most studied visual area in the brain. In mammals, it is located in the posterior pole of the occipital lobe and is the simplest, earliest cortical visual area. It is highly specialized for processing information about static and moving objects and is excellent in pattern recognition. Moreover, V1 is characterized by a laminar organization, with six distinct layers, each playing a unique role in visual processing. Neurons in the superficial layers (II and III) are often involved in local processing and communication within the cortex, while neurons in the deeper layers (V and VI) often send information to other brain regions involved in higher-order visual processing and decision-making.

Research on V1 has also revealed the presence of orientation-selective cells, which respond preferentially to stimuli with a specific orientation, contributing to the perception of edges and contours. The discovery of these orientation-selective cells has been fundamental in shaping our understanding of how V1 processes visual information.

Furthermore, V1 exhibits plasticity, allowing it to undergo functional and structural changes in response to sensory experience. Studies have demonstrated that sensory deprivation or exposure to enriched environments can lead to alterations in the organization and responsiveness of V1 neurons, highlighting the dynamic nature of this critical visual processing hub.[12][clarification needed]

The primary visual cortex, which is defined by its function or stage in the visual system, is approximately equivalent to the striate cortex, also known as Brodmann area 17, which is defined by its anatomical location. The name "striate cortex" is derived from the line of Gennari, a distinctive stripe visible to the naked eye[13] that represents myelinated axons from the lateral geniculate body terminating in layer 4 of the gray matter.

Brodmann area 17 is one subdivision of the broader set of Brodmann areas, regions of the cerebral cortex defined on the basis of cytoarchitectural differences. In the striate cortex, the line of Gennari corresponds to a band rich in myelinated nerve fibers, providing a clear anatomical marker for the primary visual processing region.

The functional significance of the striate cortex extends beyond its role as the earliest cortical stage: it carries out the initial analysis of basic features such as orientation, spatial frequency, and color, and the integration of these features forms the foundation for the more complex processing carried out in higher-order visual areas. Neuroimaging studies have contributed to a deeper understanding of the dynamic interactions within the striate cortex and of its connections with other visual and non-visual brain regions, shedding light on the neural circuits that underlie visual perception.

The primary visual cortex is divided into six functionally distinct layers, labeled 1 to 6. Layer 4, which receives most of the visual input from the lateral geniculate nucleus (LGN), is further divided into four sublayers, labeled 4A, 4B, 4Cα, and 4Cβ. Sublayer 4Cα receives mostly magnocellular input from the LGN, while sublayer 4Cβ receives input from parvocellular pathways.[14]

The average number of neurons in the adult human primary visual cortex in each hemisphere has been estimated at 140 million.[15]

Function

The initial stage of visual processing within the cortex, known as V1, plays a fundamental role in shaping our perception of the visual world. V1 possesses a meticulously defined map, referred to as the retinotopic map, which intricately organizes spatial information from the visual field. In humans, the upper bank of the calcarine sulcus in the occipital lobe robustly responds to the lower half of the visual field, while the lower bank responds to the upper half. This retinotopic mapping conceptually represents a projection of the visual image from the retina to V1.

The importance of this retinotopic organization lies in its ability to preserve spatial relationships present in the external environment. Neighboring neurons in V1 exhibit responses to adjacent portions of the visual field, creating a systematic representation of the visual scene. This mapping extends both vertically and horizontally, ensuring the conservation of both horizontal and vertical relationships within the visual input.

Moreover, the retinotopic map demonstrates a remarkable degree of plasticity, adapting to alterations in visual experience. Studies have revealed that changes in sensory input, such as those induced by visual training or deprivation, can lead to shifts in the retinotopic map. This adaptability underscores the brain's capacity to reorganize in response to varying environmental demands, highlighting the dynamic nature of visual processing.

Beyond its spatial processing role, the retinotopic map in V1 is intricately connected with other visual areas, forming a network that integrates diverse visual features into a coherent visual percept. This mapping is fundamental to our ability to navigate and interpret the visual world effectively.[17]

The correspondence between a given location in V1 and the subjective visual field is very precise: even the blind spots of the retina are mapped into V1. In evolutionary terms, this correspondence is a basic feature found in most animals that possess a V1. In humans and other species with a fovea (a cone-rich region of the retina), a large portion of V1 is mapped to the small central portion of the visual field, a phenomenon known as cortical magnification.[18] This magnification reflects the increased representation and processing capacity devoted to the central visual field, which is essential for detailed visual acuity.

Perhaps for the purpose of accurate spatial encoding, neurons in V1 have the smallest receptive field size (that is, the highest resolution) of any region of the visual cortex, equipping V1 to capture fine detail in the visual input and underscoring its role as a critical hub in early visual processing.[16]
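
Cortical magnification is often summarized by an inverse-linear magnification function, in which the millimetres of cortex devoted to one degree of visual field shrink with eccentricity. The sketch below uses the common form M(E) = m0 / (E + e2); the constants are rough, illustrative estimates loosely based on published human values and should be treated as assumptions rather than measurements.

```python
import numpy as np

def cortical_magnification_mm_per_deg(eccentricity_deg,
                                      m0_mm_per_deg=17.3,
                                      e2_deg=0.75):
    """Approximate linear cortical magnification in human V1.

    Uses the common inverse-linear form M(E) = m0 / (E + e2); the
    constants here are rough, illustrative estimates.
    """
    return m0_mm_per_deg / (eccentricity_deg + e2_deg)

# Cortical distance spanned by a 1-degree patch of visual field at
# increasing eccentricities: central vision gets far more cortex.
for ecc in (0.5, 2.0, 10.0, 40.0):
    mm = cortical_magnification_mm_per_deg(ecc)
    print(f"eccentricity {ecc:5.1f} deg -> about {mm:4.1f} mm of V1 per degree")
```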

The tuning properties of V1 neurons (what the neurons respond to) change over time. Early in the response (40 ms and later), individual V1 neurons are strongly tuned to a small set of stimuli; that is, their responses can discriminate small changes in visual orientation, spatial frequency, and color (as in the optical system of a camera obscura, but here projected onto the retinal cells of the eye, which are clustered in density and fineness).[17] In this sense, each V1 neuron carries forward a signal that originates from a small region of the retina. Furthermore, individual V1 neurons in humans and other animals with binocular vision exhibit ocular dominance, namely tuning to one of the two eyes. In V1, and in primary sensory cortex in general, neurons with similar tuning properties tend to cluster together as cortical columns. David Hubel and Torsten Wiesel proposed the classic ice-cube organization model of cortical columns for two tuning properties: ocular dominance and orientation. However, this model cannot accommodate color, spatial frequency, and the many other features to which neurons are tuned.[citation needed] The exact organization of all these cortical columns within V1 remains a topic of current research. The mathematical modeling of this function has been compared to Gabor transforms.[citation needed]
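
The comparison to Gabor transforms can be made concrete with a short sketch. The code below builds a single Gabor filter (a sinusoidal grating under a Gaussian envelope), a standard simplified model of a V1 simple-cell receptive field tuned to one orientation and spatial frequency; the function name and parameter values are arbitrary choices for illustration, not a model fitted to data.

```python
import numpy as np

def gabor_kernel(size=31, wavelength_px=8.0, orientation_rad=0.0,
                 sigma_px=4.0, phase_rad=0.0):
    """2D Gabor filter: a sinusoidal grating under a Gaussian envelope,
    a standard simplified model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the grating runs along the preferred orientation.
    x_rot = x * np.cos(orientation_rad) + y * np.sin(orientation_rad)
    y_rot = -x * np.sin(orientation_rad) + y * np.cos(orientation_rad)
    envelope = np.exp(-(x_rot**2 + y_rot**2) / (2.0 * sigma_px**2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength_px + phase_rad)
    return envelope * carrier

# The model cell's response is the dot product of the filter with an image
# patch: a grating at the preferred orientation excites it strongly,
# an orthogonal grating barely at all.
patch_size = 31
xx = np.arange(patch_size) - patch_size // 2
vertical_grating = np.cos(2 * np.pi * xx / 8.0)[np.newaxis, :].repeat(patch_size, axis=0)
kernel = gabor_kernel(orientation_rad=0.0)
print("response to preferred grating: ", float(np.sum(kernel * vertical_grating)))
print("response to orthogonal grating:", float(np.sum(kernel * vertical_grating.T)))
```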

Later in time (after 100 ms), neurons in V1 are also sensitive to the more global organisation of the scene (Lamme & Roelfsema, 2000).[19] These response properties probably stem from recurrent feedback processing (the influence of higher-tier cortical areas on lower-tier cortical areas) and lateral connections from pyramidal neurons (Hupe et al. 1998). While feedforward connections are mainly driving, feedback connections are mostly modulatory in their effects (Angelucci et al., 2003; Hupe et al., 2001). Evidence shows that feedback originating in higher-level areas such as V4, IT, or MT, with bigger and more complex receptive fields, can modify and shape V1 responses, accounting for contextual or extra-classical receptive field effects (Guo et al., 2007; Huang et al., 2007; Sillito et al., 2006).

The visual information relayed to V1 is not coded in terms of spatial (or optical) imagery[citation needed] but rather is better described as edge detection.[20] As an example, for an image comprising one black half and one white half, the dividing line between black and white has the strongest local contrast (that is, an edge) and is encoded, while few neurons code the brightness information (black or white per se). As information is relayed to subsequent visual areas, it is coded as increasingly non-local frequency/phase signals. Note that, at these early stages of cortical visual processing, the spatial location of visual information is well preserved amid the local contrast encoding (edge detection).
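
The contrast-versus-brightness coding described above can be illustrated with a minimal sketch: taking local differences along a half-black, half-white image row yields a signal only at the black/white border and none inside the uniform regions. This is a toy illustration of local contrast coding, not a model of actual V1 circuitry.

```python
import numpy as np

# One row of an image whose left half is black (0) and right half white (1).
row = np.concatenate([np.zeros(8), np.ones(8)])

# Local contrast between neighbouring samples: non-zero only where the
# brightness changes, i.e. at the black/white border.
edge_code = np.diff(row)

print("luminance:", row.astype(int))
print("edge code:", edge_code.astype(int))
# Only the sample at the border carries a signal; the uniform black and
# white interiors are coded as zero, mirroring the idea that few neurons
# code absolute brightness.
```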

A theoretical explanation of the computational function of simple cells in the primary visual cortex has been proposed.[21][22][23] This account describes how receptive field shapes similar to those found in the biological receptive field measurements of DeAngelis et al.[24][25] can be derived from structural properties of the environment combined with internal consistency requirements that guarantee consistent image representations over multiple spatial and temporal scales. It also describes how the characteristic receptive field shapes, tuned to different scales, orientations, and directions in image space, allow the visual system to compute invariant responses under natural image transformations at higher levels in the visual hierarchy.[26][22][23]
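
One family of receptive field models of this kind is based on Gaussian derivatives: smoothing at a chosen spatial scale followed by differentiation in a chosen direction yields an oriented, scale-tuned filter. The sketch below is a generic, minimal illustration of that idea under assumed scale and orientation values; it is not code taken from the cited papers.

```python
import numpy as np

def gaussian_derivative_kernel(size=21, sigma=3.0, orientation_rad=0.0):
    """First-order directional Gaussian derivative: an oriented,
    scale-tuned receptive field model (spatial scale set by sigma)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    gaussian = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    # Derivative of the Gaussian along the direction given by orientation_rad.
    u = x * np.cos(orientation_rad) + y * np.sin(orientation_rad)
    return -u / sigma**2 * gaussian

# Kernels at two scales and orientations; coarser scales (larger sigma)
# respond to lower spatial frequencies, mirroring multi-scale tuning.
fine_vertical = gaussian_derivative_kernel(sigma=2.0, orientation_rad=0.0)
coarse_oblique = gaussian_derivative_kernel(sigma=5.0, orientation_rad=np.pi / 4)
print(fine_vertical.shape, coarse_oblique.shape)
```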

In primates, one role of V1 might be to create a saliency map (highlighting what is important) from visual inputs, in order to guide the shifts of attention known as gaze shifts.[27]

According to the V1 Saliency Hypothesis, V1 does this by transforming visual inputs into the neural firing rates of millions of neurons, such that the visual location signaled by the highest-firing neuron is the most salient location, attracting a gaze shift. V1's outputs are received by, among other areas, the superior colliculus (in the midbrain), which reads out V1 activity to guide gaze shifts.
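Under this hypothesis the read-out rule is simple in principle: across the map of V1 responses, the location of the single most active neuron becomes the next target of gaze. The sketch below is a toy winner-take-all read-out over a made-up grid of firing rates; it only illustrates the read-out rule, not how V1 actually computes the rates.

```python
import numpy as np

# Hypothetical peak firing rates (Hz) of V1 neurons tiling a small patch
# of the visual field; one location containing a salient, odd-one-out
# feature drives its neuron harder than its neighbours.
firing_rates_hz = np.array([
    [12, 11, 13, 12],
    [11, 12, 48, 13],   # the salient item pops out here
    [13, 12, 11, 12],
])

# Winner-take-all read-out (as proposed for the superior colliculus):
# the most salient location is simply where firing is highest.
row, col = np.unravel_index(np.argmax(firing_rates_hz), firing_rates_hz.shape)
print(f"next gaze shift directed to visual-field location (row={row}, col={col})")
```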

Differences in size of V1 also seem to have an effect on the perception of illusions.[28]

V2

The different V regions (see also: colour centre)

Visual area V2, or secondary visual cortex, also called prestriate cortex,[29] receives strong feedforward connections from V1 (direct and via the pulvinar) and sends robust connections to V3, V4, and V5. Additionally, it plays a crucial role in the integration and processing of visual information.

The feedforward connections from V1 to V2 contribute to the hierarchical processing of visual stimuli. V2 neurons build upon the basic features detected in V1, extracting more complex visual attributes such as texture, depth, and color. This hierarchical processing is essential for the construction of a more nuanced and detailed representation of the visual scene.

Furthermore, the reciprocal feedback connections from V2 to V1 play a significant role in modulating the activity of V1 neurons. This feedback loop is thought to be involved in processes such as attention, perceptual grouping, and figure-ground segregation. The dynamic interplay between V1 and V2 highlights the intricate nature of information processing within the visual system.

Moreover, V2's connections with subsequent visual areas, including V3, V4, and V5, contribute to the formation of a distributed network for visual processing. These connections enable the integration of different visual features, such as motion and form, across multiple stages of the visual hierarchy.[30]


In terms of anatomy, V2 is split into four quadrants, a dorsal and ventral representation in the left and the right hemispheres. Together, these four regions provide a complete map of the visual world. V2 has many properties in common with V1: Cells are tuned to simple properties such as orientation, spatial frequency, and color. The responses of many V2 neurons are also modulated by more complex properties, such as the orientation of illusory contours,[31][32] binocular disparity,[33] and whether the stimulus is part of the figure or the ground.[34][35] Recent research has shown that V2 cells show a small amount of attentional modulation (more than V1, less than V4), are tuned for moderately complex patterns, and may be driven by multiple orientations at different subregions within a single receptive field.

It is argued that the entire ventral visual-to-hippocampal stream is important for visual memory.[36] This theory, unlike the dominant one, predicts that object-recognition memory (ORM) alterations could result from manipulation of V2, an area that is highly interconnected within the ventral stream of visual cortices. In the monkey brain, this area receives strong feedforward connections from the primary visual cortex (V1) and sends strong projections to other secondary visual cortices (V3, V4, and V5).[37][38] Most of the neurons of this area in primates are tuned to simple visual characteristics such as orientation, spatial frequency, size, color, and shape.[32][39][40] Anatomical studies implicate layer 3 of area V2 in visual-information processing. In contrast to layer 3, layer 6 of the visual cortex is composed of many types of neurons, and their responses to visual stimuli are more complex.

In one study, layer 6 cells of V2 were found to play a very important role in the storage of object-recognition memory as well as in the conversion of short-term object memories into long-term memories.[41]

Third visual cortex, including area V3

A visual field map of the primary visual cortex and the numerous extrastriate areas

The term third visual complex refers to the region of cortex located immediately in front of V2, which includes the region named visual area V3 in humans. The "complex" nomenclature is justified by the fact that some controversy still exists regarding the exact extent of area V3, with some researchers proposing that the cortex located in front of V2 may include two or three functional subdivisions. For example, David Van Essen and others (1986) have proposed the existence of a "dorsal V3" in the upper part of the cerebral hemisphere, which is distinct from the "ventral V3" (or ventral posterior area, VP) located in the lower part of the brain. Dorsal and ventral V3 have distinct connections with other parts of the brain, appear different in sections stained with a variety of methods, and contain neurons that respond to different combinations of visual stimuli (for example, colour-selective neurons are more common in the ventral V3). Additional subdivisions, including V3A and V3B, have also been reported in humans. These subdivisions are located near dorsal V3, but do not adjoin V2.

Dorsal V3 is normally considered to be part of the dorsal stream, receiving inputs from V2 and from the primary visual area and projecting to the posterior parietal cortex. It may be anatomically located in Brodmann area 19. Braddick and colleagues, using fMRI, have suggested that area V3/V3A may play a role in the processing of global motion.[42] Other studies prefer to consider dorsal V3 as part of a larger area, named the dorsomedial area (DM), which contains a representation of the entire visual field. Neurons in area DM respond to coherent motion of large patterns covering extensive portions of the visual field (Lui and collaborators, 2006).

Ventral V3 (VP) has much weaker connections from the primary visual area, and stronger connections with the inferior temporal cortex. While earlier studies proposed that VP contained a representation of only the upper part of the visual field (above the point of fixation), more recent work indicates that this area is more extensive than previously appreciated, and like other visual areas it may contain a complete visual representation. The revised, more extensive VP is referred to as the ventrolateral posterior area (VLP) by Rosa and Tweedale.[43]

V4

The lingual gyrus is the hypothesized location of human V4 (hV4).
The fusiform gyrus is the hypothesized location of V4α, a secondary area for colour processing.

Visual area V4 is one of the visual areas in the extrastriate visual cortex. In macaques, it is located anterior to V2 and posterior to the posterior inferotemporal area (PIT). It comprises at least four regions (left and right V4d, left and right V4v), and some groups report that it contains rostral and caudal subdivisions as well. Whether human V4 is as expansive as its macaque homologue remains a subject of debate.[44]

V4 is the third cortical area in the ventral stream, receiving strong feedforward input from V2 and sending strong connections to the PIT. It also receives direct input from V1, especially for central space. In addition, it has weaker connections to V5 and dorsal prelunate gyrus (DP).

V4 is the first area in the ventral stream to show strong attentional modulation. Most studies indicate that selective attention can change firing rates in V4 by about 20%. A seminal paper by Moran and Desimone characterizing these effects was the first to report attention effects anywhere in the visual cortex.[45]
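
A roughly 20% change in firing rate is often described as a multiplicative gain applied when attention is directed into a neuron's receptive field. The sketch below expresses that simple gain model; the function name, gain value, and rates are illustrative assumptions, not data from the cited study.

```python
def v4_response_hz(driven_rate_hz, attended, attention_gain=1.2):
    """Toy multiplicative gain model of attentional modulation in V4:
    attending to the stimulus in the receptive field scales the
    stimulus-driven firing rate by ~20% (gain of 1.2, an assumed value)."""
    return driven_rate_hz * (attention_gain if attended else 1.0)

print(v4_response_hz(30.0, attended=False))  # 30.0 Hz, attention elsewhere
print(v4_response_hz(30.0, attended=True))   # 36.0 Hz, ~20% increase
```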

Like V2, V4 is tuned for orientation, spatial frequency, and color. Unlike V2, V4 is tuned for object features of intermediate complexity, like simple geometric shapes, although no one has developed a full parametric description of the tuning space for V4. Visual area V4 is not tuned for complex objects such as faces, as areas in the inferotemporal cortex are.

The firing properties of V4 were first described by Semir Zeki in the late 1970s, who also named the area. Before that, V4 was known by its anatomical description, the prelunate gyrus. Originally, Zeki argued that the purpose of V4 was to process color information. Work in the early 1980s proved that V4 was as directly involved in form recognition as earlier cortical areas.[citation needed] This research supported the two-streams hypothesis, first presented by Ungerleider and Mishkin in 1982.

Recent work has shown that V4 exhibits long-term plasticity,[46] encodes stimulus salience, is gated by signals coming from the frontal eye fields,[47] and shows changes in the spatial profile of its receptive fields with attention.[citation needed] In addition, it has recently been shown that activation of area V4 in humans (area V4h) is observed during the perception and retention of the color of objects, but not their shape.[48][49]

Middle temporal visual area (V5)

The middle temporal visual area (MT or V5) is a region of extrastriate visual cortex. In several species of both New World monkeys and Old World monkeys the MT area contains a high concentration of direction-selective neurons.[50] The MT in primates is thought to play a major role in the perception of motion, the integration of local motion signals into global percepts, and the guidance of some eye movements.[50]

Connections

MT is connected to a wide array of cortical and subcortical brain areas. Its input comes from visual cortical areas V1, V2 and dorsal V3 (dorsomedial area),[51][52] the koniocellular regions of the LGN,[53] and the inferior pulvinar.[54] The pattern of projections to MT changes somewhat between the representations of the foveal and peripheral visual fields, with the latter receiving inputs from areas located in the midline cortex and retrosplenial region.[55]

A standard view is that V1 provides the "most important" input to MT.[50] Nonetheless, several studies have demonstrated that neurons in MT are capable of responding to visual information, often in a direction-selective manner, even after V1 has been destroyed or inactivated.[56] Moreover, research by Semir Zeki and collaborators has suggested that certain types of visual information may reach MT before it even reaches V1.

MT sends its major output to areas located in the cortex immediately surrounding it, including areas FST, MST, and V4t (middle temporal crescent). Other projections of MT target the eye movement-related areas of the frontal and parietal lobes (frontal eye field and lateral intraparietal area).

Function

The first studies of the electrophysiological properties of neurons in MT showed that a large portion of the cells are tuned to the speed and direction of moving visual stimuli.[57][58]

Lesion studies have also supported the role of MT in motion perception and eye movements.[59] Neuropsychological studies of a patient unable to see motion, seeing the world in a series of static 'frames' instead, suggested that human V5 is homologous to monkey MT.[60][61]

However, since neurons in V1 are also tuned to the direction and speed of motion, these early results left open the question of precisely what MT could do that V1 could not. Much work has been carried out on this region, as it appears to integrate local visual motion signals into the global motion of complex objects.[62] For example, lesions of V5 lead to deficits in motion perception and in the processing of complex stimuli. It contains many neurons selective for the motion of complex visual features (line ends, corners). Microstimulation of a neuron located in V5 affects the perception of motion. For example, if one finds a neuron with a preference for upward motion in a monkey's V5 and stimulates it with an electrode, the monkey becomes more likely to report 'upward' motion when presented with stimuli containing 'leftward' and 'rightward' as well as 'upward' components.[63]
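
The effect of microstimulation on reported motion direction is often interpreted with simple population read-out models, in which the perceived direction is a firing-rate-weighted average of each neuron's preferred direction. The sketch below implements such a vector-average read-out with made-up numbers; boosting the firing of an "upward-preferring" unit pulls the read-out toward upward motion, qualitatively mirroring the microstimulation result. It is an illustration of one candidate read-out rule, not the computation demonstrated in the cited work.

```python
import numpy as np

def vector_average_direction(preferred_deg, rates_hz):
    """Population vector read-out: firing-rate-weighted average of the
    neurons' preferred directions, returned in degrees."""
    angles = np.deg2rad(preferred_deg)
    x = np.sum(rates_hz * np.cos(angles))
    y = np.sum(rates_hz * np.sin(angles))
    return np.rad2deg(np.arctan2(y, x)) % 360.0

preferred_deg = np.array([0.0, 90.0, 180.0, 270.0])   # right, up, left, down
rates_hz = np.array([30.0, 10.0, 10.0, 10.0])         # made-up responses: net rightward

print(vector_average_direction(preferred_deg, rates_hz))   # ~0 deg (rightward)
rates_hz[1] += 30.0   # mimic microstimulation of the upward-preferring unit
print(vector_average_direction(preferred_deg, rates_hz))   # pulled toward 90 deg (upward)
```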

There is still much controversy over the exact form of the computations carried out in area MT[64] and some research suggests that feature motion is in fact already available at lower levels of the visual system such as V1.[65][66]

Functional organization

MT was shown to be organized in direction columns.[67] DeAngelis and Newsome argued that MT neurons are also organized on the basis of their tuning for binocular disparity.[68]

V6

The dorsomedial area (DM) also known as V6, appears to respond to visual stimuli associated with self-motion[69] and wide-field stimulation.[70] V6 is a subdivision of the visual cortex of primates first described by John Allman and Jon Kaas in 1975.[71] V6 is located in the dorsal part of the extrastriate cortex, near the deep groove through the centre of the brain (medial longitudinal fissure), and typically also includes portions of the medial cortex, such as the parieto-occipital sulcus (POS).[70]: 7970  DM contains a topographically organized representation of the entire field of vision.[70]: 7970 

There are similarities between visual areas V5 and V6 of the common marmoset. Both areas receive direct connections from the primary visual cortex,[70]: 7971  and both have a high myelin content, a characteristic usually present in brain structures involved in the fast transmission of information.[72]

For many years, it was considered that DM only existed in New World monkeys. However, more recent research has suggested that DM also exists in Old World monkeys and humans.[70]: 7972  V6 is also sometimes referred to as the parieto-occipital area (PO), although the correspondence is not exact.[73][74]

Properties

Neurons in area DM/V6 of night monkeys and common marmosets have unique response properties, including an extremely sharp selectivity for the orientation of visual contours, and preference for long, uninterrupted lines covering large parts of the visual field.[75][76]

However, in comparison with area MT, a much smaller proportion of DM cells shows selectivity for the direction of motion of visual patterns.[77] Another notable difference with area MT is that cells in DM are attuned to low spatial frequency components of an image, and respond poorly to the motion of textured patterns such as a field of random dots.[77] These response properties suggest that DM and MT may work in parallel, with the former analyzing self-motion relative to the environment, and the latter analyzing the motion of individual objects relative to the background.[77]

Recently, an area responsive to wide-angle flow fields has been identified in the human and is thought to be a homologue of macaque area V6.[78]

Pathways

The connections and response properties of cells in DM/V6 suggest that this area is a key node in a subset of the "dorsal stream", referred to by some as the "dorsomedial pathway".[79] This pathway is likely to be important for the control of skeletomotor activity, including postural reactions and reaching movements towards objects.[74] The main 'feedforward' connection of DM is to the cortex immediately rostral to it, at the interface between the occipital and parietal lobes (V6A).[79] This region has, in turn, relatively direct connections with the regions of the frontal lobe that control arm movements, including the premotor cortex.[79][80]


References

  1. ^ Mather, George. "The Visual Cortex". School of Life Sciences: University of Sussex. Retrieved 2017-03-06.
  2. ^ Fişek M, et al. (2023). "Cortico-cortical feedback engages active dendrites in visual cortex". Nature. 617 (7962): 769–776. doi:10.1038/s41586-023-06007-6.
  3. ^ Benson, Noah C.; Yoon, Jennifer M. D.; Forenzo, Dylan; Engel, Stephen A.; Kay, Kendrick N.; Winawer, Jonathan (30 September 2022). "Variability of the Surface Area of the V1, V2, and V3 Maps in a Large Sample of Human Observers". The Journal of Neuroscience. 42 (46): 8629–8646. doi:10.1523/jneurosci.0690-21.2022. ISSN 0270-6474. PMC 9671582. PMID 36180226.
  4. ^ Braz, José; Pettré, Julien; Richard, Paul; Kerren, Andreas; Linsen, Lars; Battiato, Sebastiano; Imai, Francisco (11 February 2016). "Algorithmic Optimizations in the HMAX Model Targeted for Efficient Object Recognition". In Bitar, Ahmad W.; Mansour, Mohamad M.; Chehab, Ali (eds.). Computer Vision, Imaging and Computer Graphics Theory and Applications. Berlin, Germany: Springer. p. 377. ISBN 978-3-319-29971-6.
  5. ^ Ungerleider LG, Mishkin M (1982). "Two Cortical Visual Systems". In Ingle DJ, Goodale MA, Mansfield RJ (eds.). Analysis of Visual Behavior. Boston: MIT Press. pp. 549–586. ISBN 978-0-262-09022-3.
  6. ^ Goodale MA, Milner AD (1992). "Separate pathways for perception and action". Trends in Neurosciences. 15 (1): 20–25. CiteSeerX 10.1.1.207.6873. doi:10.1016/0166-2236(92)90344-8. PMID 1374953. S2CID 793980.
  7. ^ Aglioti S, DeSouza JF, Goodale MA (1995). "Size-contrast illusions deceive the eye but not the hand". Current Biology. 5 (6): 679–85. doi:10.1016/S0960-9822(95)00133-3. PMID 7552179. S2CID 206111613.
  8. ^ Franz VH, Scharnowski F, Gegenfurtner KR (2005). "Illusion effects on grasping are temporally constant not dynamic". Journal of Experimental Psychology: Human Perception and Performance. 31 (6): 1359–78. doi:10.1037/0096-1523.31.6.1359. PMID 16366795.
  9. ^ Ganel T, Goodale MA (2003). "Visual control of action but not perception requires analytical processing of object shape". Nature. 426 (6967): 664–7. Bibcode:2003Natur.426..664G. doi:10.1038/nature02156. PMID 14668865. S2CID 4314969.
  10. ^ Ganel T, Tanzer M, Goodale MA (2008). "A double dissociation between action and perception in the context of visual illusions: opposite effects of real and illusory size". Psychological Science. 19 (3): 221–5. doi:10.1111/j.1467-9280.2008.02071.x. PMID 18315792. S2CID 15679825.
  11. ^ Goodale MA (2011). "Transforming vision into action". Vision Research. 51 (14): 1567–87. doi:10.1016/j.visres.2010.07.027. PMID 20691202.
  12. ^ Coen P, et al. (2023). "Mouse frontal cortex mediates additive multisensory decisions". Neuron. 111 (15): 2432–2447.e13. doi:10.1016/j.neuron.2023.05.008.
  13. ^ Glickstein M, Rizzolatti G (1 December 1984). "Francesco Gennari and the structure of the cerebral cortex". Trends in Neurosciences. 7 (12): 464–467. doi:10.1016/S0166-2236(84)80255-6. S2CID 53168851.
  14. ^ Hubel DH, Wiesel TN (1972). "Laminar and columnar distribution of geniculo-cortical fibers in the macaque monkey". Journal of Comparative Neurology. 146 (4): 421–450. doi:10.1002/cne.901460402. PMID 4117368. S2CID 6478458.
  15. ^ Leuba G, Kraftsik R (1994). "Changes in volume, surface estimate, three-dimensional shape and total number of neurons of the human primary visual cortex from midgestation until old age". Anatomy and Embryology. 190 (4): 351–366. doi:10.1007/BF00187293. PMID 7840422. S2CID 28320951.
  16. ^ Wu F, et al. (2023). "A Comprehensive Overview of the Role of Visual Cortex Malfunction in Depressive Disorders: Opportunities and Challenges". Neuroscience Bulletin. 39 (9): 1426–1438. doi:10.1007/s12264-023-01052-7.
  17. ^ a b Johannes Kepler (1604) Paralipomena to Witelo whereby The Optical Part of Astronomy is Treated (Ad Vitellionem Paralipomena, quibus astronomiae pars optica traditvr, 1604), as cited by A.Mark Smith (2015) From Sight to Light. Kepler modeled the eye as a water-filled glass sphere, and discovered that each point of the scene taken in by the eye projects onto a point on the back of the eye (the retina).
  18. ^ Barghout, Lauren (1999). On the Differences Between Peripheral and Foveal Pattern Masking (Masters). Berkeley, California: University of California, Berkeley.
  19. ^ Barghout, Lauren (2003). Vision: How Global Perceptual Context Changes Local Contrast Processing (Ph.D. Dissertation). Updated to include computer vision techniques. Scholar's Press. ISBN 978-3-639-70962-9.
  20. ^ Kesserwani, Hassan (28 October 2020). "The Biophysics of Visual Edge Detection: A Review of Basic Principles". Cureus. 12 (10): e11218. doi:10.7759/cureus.11218. PMC 7706146. PMID 33269147.
  21. ^ Lindeberg T (2013). "A computational theory of visual receptive fields". Biological Cybernetics. 107 (6): 589–635. doi:10.1007/s00422-013-0569-z. PMC 3840297. PMID 24197240.
  22. ^ a b Lindeberg T (2021). "Normative theory of visual receptive fields". Heliyon. 7 (1): e05897:1–20. Bibcode:2021Heliy...705897L. doi:10.1016/j.heliyon.2021.e05897. PMC 7820928. PMID 33521348.
  23. ^ a b Lindeberg T (2023). "Covariance properties under natural image transformations for the generalized Gaussian derivative model for visual receptive fields". Frontiers in Computational Neuroscience. 17. 1189949. doi:10.3389/fncom.2023.1189949. PMC 10311448. PMID 37398936.
  24. ^ DeAngelis GC, Ohzawa I, Freeman RD (1995). "Receptive field dynamics in the central visual pathways". Trends in Neurosciences. 18 (10): 451–457. doi:10.1016/0166-2236(95)94496-r. PMID 8545912. S2CID 12827601.
  25. ^ DeAngelis GC, Anzai A (2004). "A modern view of the classical receptive field: linear and non-linear spatio-temporal processing by V1 neurons". In Chalupa LM, Werner JS (eds.). The Visual Neurosciences. Vol. 1. Cambridge: MIT Press. pp. 704–719.
  26. ^ Lindeberg T (2013). "Invariance of visual operations at the level of receptive fields". PLOS ONE. 8 (7): e66990. arXiv:1210.0754. Bibcode:2013PLoSO...866990L. doi:10.1371/journal.pone.0066990. PMC 3716821. PMID 23894283.
  27. ^ Zhaoping, L. (2014). "The V1 hypothesis—creating a bottom-up saliency map for pre-attentive selection and segmentation". Understanding Vision: Theory, Models, and Data. Oxford University Press. pp. 189–314. doi:10.1093/acprof:oso/9780199564668.003.0005. ISBN 9780199564668.
  28. ^ Schwarzkopf DS (2011). "The surface area of human V1 predicts the subjective experience of object size". Nature Neuroscience. 14 (1): 28–30. doi:10.1038/nn.2706. PMC 3012031. PMID 21131954.
  29. ^ Gazzaniga; Ivry; Mangun (2002). Cognitive neuroscience.[full citation needed]
  30. ^ Taylor K, Rodriguez J (19 September 2022). "Visual Discrimination". StatPearls. StatPearls Publishing.
  31. ^ von der Heydt R, Peterhans E, Baumgartner G (1984). "Illusory contours and cortical neuron responses". Science. 224 (4654): 1260–62. Bibcode:1984Sci...224.1260V. doi:10.1126/science.6539501. PMID 6539501.
  32. ^ a b Anzai A, Peng X, Van Essen DC (2007). "Neurons in monkey visual area V2 encode combinations of orientations". Nature Neuroscience. 10 (10): 1313–21. doi:10.1038/nn1975. PMID 17873872. S2CID 6796448.
  33. ^ von der Heydt R, Zhou H, Friedman HS (2000). "Representation of stereoscopic edges in monkey visual cortex". Vision Research. 40 (15): 1955–67. doi:10.1016/s0042-6989(00)00044-4. PMID 10828464. S2CID 10269181.
  34. ^ Qiu FT, von der Heydt R (2005). "Figure and ground in the visual cortex: V2 combines stereoscopic cues with Gestalt rules". Neuron. 47 (1): 155–66. doi:10.1016/j.neuron.2005.05.028. PMC 1564069. PMID 15996555.
  35. ^ Maruko I, et al. (2008). "Postnatal Development of Disparity Sensitivity in Visual Area 2 (V2) of Macaque Monkeys". Journal of Neurophysiology. 100 (5): 2486–2495. doi:10.1152/jn.90397.2008. PMC 2585398. PMID 18753321.
  36. ^ Bussey TJ, Saksida LM (2007). "Memory, perception, and the ventral visual-perirhinal-hippocampal stream: thinking outside of the boxes". Hippocampus. 17 (9): 898–908. doi:10.1002/hipo.20320. PMID 17636546. S2CID 13271331.
  37. ^ Stepniewska I, Kaas JH (1996). "Topographic patterns of V2 cortical connections in macaque monkeys". The Journal of Comparative Neurology. 371 (1): 129–152. doi:10.1002/(SICI)1096-9861(19960715)371:1<129::AID-CNE8>3.0.CO;2-5. PMID 8835723. S2CID 8500842.
  38. ^ Gattas R, Sousa AP, Mishkin M, Ungerleider LG (1997). "Cortical projections of area V2 in the macaque". Cerebral Cortex. 7 (2): 110–129. doi:10.1093/cercor/7.2.110. PMID 9087820.
  39. ^ Hegdé, Jay; Van Essen, D. C. (2000). "Selectivity for Complex Shapes in Primate Visual Area V2". The Journal of Neuroscience. 20 (5): RC61. doi:10.1523/JNEUROSCI.20-05-j0001.2000. PMC 6772904. PMID 10684908.
  40. ^ Hegdé, Jay; Van Essen, D. C. (2004). "Temporal dynamics of shape analysis in Macaque visual area V2". Journal of Neurophysiology. 92 (5): 3030–3042. doi:10.1152/jn.00822.2003. PMID 15201315. S2CID 6428310.
  41. ^ López-Aranda, Manuel F.; et al. (2009). "Role of Layer 6 of V2 Visual Cortex in Object Recognition Memory". Science. 325 (5936): 87–89. Bibcode:2009Sci...325...87L. doi:10.1126/science.1170869. PMID 19574389. S2CID 23990759.
  42. ^ Braddick OJ, O'Brien JM, et al. (2001). "Brain areas sensitive to coherent visual motion". Perception. 30 (1): 61–72. doi:10.1068/p3048. PMID 11257978. S2CID 24081674.
  43. ^ Rosa MG, Tweedale R (2000). "Visual areas in lateral and ventral extrastriate cortices of the marmoset monkey". Journal of Comparative Neurology. 422 (4): 621–51. doi:10.1002/1096-9861(20000710)422:4<621::AID-CNE10>3.0.CO;2-E. PMID 10861530. S2CID 25982910.
  44. ^ Goddard, Erin; Mannion, Damien J.; McDonald, J. Scott; Solomon, Samuel G.; Clifford, Colin W. G. (April 2011). "Color responsiveness argues against a dorsal component of human V4". Journal of Vision. 11 (4): 3. doi:10.1167/11.4.3. PMID 21467155.
  45. ^ Moran J, Desimone R (1985). "Selective Attention Gates Visual Processing in the Extrastriate Cortex". Science. 229 (4715): 782–4. Bibcode:1985Sci...229..782M. CiteSeerX 10.1.1.308.6038. doi:10.1126/science.4023713. PMID 4023713.
  46. ^ Schmid MC, Schmiedt JT, Peters AJ, Saunders RC, Maier A, Leopold DA (27 November 2013). "Motion-Sensitive Responses in Visual Area V4 in the Absence of Primary Visual Cortex". Journal of Neuroscience. 33 (48): 18740–18745. doi:10.1523/JNEUROSCI.3923-13.2013. PMC 3841445. PMID 24285880.
  47. ^ Moore, Tirin; Armstrong, Katherine M. (2003). "Selective gating of visual signals by microstimulation of frontal cortex". Nature. 421 (6921): 370–373. Bibcode:2003Natur.421..370M. doi:10.1038/nature01341. PMID 12540901. S2CID 4405385.
  48. ^ Kozlovskiy, Stanislav; Rogachev, Anton (2021). "How Areas of Ventral Visual Stream Interact When We Memorize Color and Shape Information". In Velichkovsky, Boris M.; Balaban, Pavel M.; Ushakov, Vadim L. (eds.). Advances in Cognitive Research, Artificial Intelligence and Neuroinformatics. Intercognsci 2020. Advances in Intelligent Systems and Computing. Vol. 1358. Cham: Springer International Publishing. pp. 95–100. doi:10.1007/978-3-030-71637-0_10. ISBN 978-3-030-71636-3.
  49. ^ Kozlovskiy, Stanislav; Rogachev, Anton (October 2021). "Ventral Visual Cortex Areas and Processing of Color and Shape in Visual Working Memory". International Journal of Psychophysiology. 168 (Supplement): S155–S156. doi:10.1016/j.ijpsycho.2021.07.437. S2CID 239648133.
  50. ^ a b c Born R, Bradley D (2005). "Structure and function of visual area MT". Annual Review of Neuroscience. 28: 157–89. doi:10.1146/annurev.neuro.26.041002.131052. PMID 16022593.
  51. ^ Felleman D, Van Essen D (1991). "Distributed hierarchical processing in the primate cerebral cortex". Cerebral Cortex. 1 (1): 1–47. doi:10.1093/cercor/1.1.1-a. PMID 1822724.
  52. ^ Ungerleider L, Desimone R (1986). "Cortical connections of visual area MT in the macaque". Journal of Comparative Neurology. 248 (2): 190–222. doi:10.1002/cne.902480204. PMID 3722458. S2CID 1876622.
  53. ^ Sincich L, Park K, Wohlgemuth M, Horton J (2004). "Bypassing V1: a direct geniculate input to area MT". Nature Neuroscience. 7 (10): 1123–8. doi:10.1038/nn1318. PMID 15378066. S2CID 13419990.
  54. ^ Warner CE, Goldshmit Y, Bourne JA (2010). "Retinal afferents synapse with relay cells targeting the middle temporal area in the pulvinar and lateral geniculate nuclei". Frontiers in Neuroanatomy. 4: 8. doi:10.3389/neuro.05.008.2010. PMC 2826187. PMID 20179789.
  55. ^ Palmer SM, Rosa MG (2006). "A distinct anatomical network of cortical areas for analysis of motion in far peripheral vision". European Journal of Neuroscience. 24 (8): 2389–405. doi:10.1111/j.1460-9568.2006.05113.x. PMID 17042793. S2CID 21562682.
  56. ^ Rodman HR, Gross CG, Albright TD (1989). "Afferent basis of visual response properties in area MT of the macaque. I. Effects of striate cortex removal". Journal of Neuroscience. 9 (6): 2033–50. doi:10.1523/JNEUROSCI.09-06-02033.1989. PMC 6569731. PMID 2723765.
  57. ^ Dubner R, Zeki S (1971). "Response properties and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey". Brain Research. 35 (2): 528–32. doi:10.1016/0006-8993(71)90494-X. PMID 5002708.
  58. ^ Maunsell J, Van Essen D (1983). "Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation". Journal of Neurophysiology. 49 (5): 1127–47. doi:10.1152/jn.1983.49.5.1127. PMID 6864242. S2CID 8708245.
  59. ^ Dursteler MR, Wurtz RH, Newsome WT (1987). "Directional pursuit deficits following lesions of the foveal representation within the superior temporal sulcus of the macaque monkey". Journal of Neurophysiology. 57 (5): 1262–87. CiteSeerX 10.1.1.375.8659. doi:10.1152/jn.1987.57.5.1262. PMID 3585468.
  60. ^ Hess RH, Baker CL, Zihl J (1989). "The 'motion-blind' patient: low-level spatial and temporal filters". Journal of Neuroscience. 9 (5): 1628–40. doi:10.1523/JNEUROSCI.09-05-01628.1989. PMC 6569833. PMID 2723744.
  61. ^ Baker CL Jr, Hess RF, Zihl J (1991). "Residual motion perception in a 'motion-blind' patient, assessed with limited-lifetime random dot stimuli". Journal of Neuroscience. 11 (2): 454–61. doi:10.1523/JNEUROSCI.11-02-00454.1991. PMC 6575225. PMID 1992012.
  62. ^ Movshon, J.A., Adelson, E.H., Gizzi, M.S., & Newsome, W.T. (1985). The analysis of moving visual patterns. In: C. Chagas, R. Gattass, & C. Gross (Eds.), Pattern recognition mechanisms (pp. 117–151), Rome: Vatican Press.
  63. ^ Britten KH, van Wezel RJ (1998). "Electrical microstimulation of cortical area MST biases heading perception in monkeys". Nature Neuroscience. 1 (1): 59–63. doi:10.1038/259. PMID 10195110. S2CID 52820462.
  64. ^ Wilson, H.R.; Ferrera, V.P.; Yo, C. (1992). "A psychophysically motivated model for two-dimensional motion perception". Visual Neuroscience. 9 (1): 79–97. doi:10.1017/s0952523800006386. PMID 1633129. S2CID 45196189.
  65. ^ Tinsley CJ, Webb BS, Barraclough NE, Vincent CJ, Parker A, Derrington AM (2003). "The nature of V1 neural responses to 2D moving patterns depends on receptive-field structure in the marmoset monkey". Journal of Neurophysiology. 90 (2): 930–7. doi:10.1152/jn.00708.2002. PMID 12711710. S2CID 540146.
  66. ^ Pack CC, Born RT, Livingstone MS (2003). "Two-dimensional substructure of stereo and motion interactions in macaque visual cortex". Neuron. 37 (3): 525–35. doi:10.1016/s0896-6273(02)01187-x. PMID 12575958.
  67. ^ Albright T (1984). "Direction and orientation selectivity of neurons in visual area MT of the macaque". Journal of Neurophysiology. 52 (6): 1106–30. doi:10.1152/jn.1984.52.6.1106. PMID 6520628.
  68. ^ DeAngelis G, Newsome W (1999). "Organization of disparity-selective neurons in macaque area MT". Journal of Neuroscience. 19 (4): 1398–1415. doi:10.1523/JNEUROSCI.19-04-01398.1999. PMC 6786027. PMID 9952417.
  69. ^ Cardin V, Smith AT (2010). "Sensitivity of human visual and vestibular cortical regions to stereoscopic depth gradients associated with self-motion". Cerebral Cortex. 20 (8): 1964–73. doi:10.1093/cercor/bhp268. PMC 2901022. PMID 20034998.
  70. ^ a b c d e Pitzalis S, et al. (2006). "Wide-Field Retinotopy Defines Human Cortical Visual Area V6". The Journal of Neuroscience. 26 (30): 7962–73. doi:10.1523/jneurosci.0178-06.2006. PMC 6674231. PMID 16870741.
  71. ^ Allman JM, Kaas JH (1975). "The dorsomedial cortical visual area: a third tier area in the occipital lobe of the owl monkey (Aotus trivirgatus)". Brain Research. 100 (3): 473–487. doi:10.1016/0006-8993(75)90153-5. PMID 811327. S2CID 22980932.
  72. ^ Pitzalis, S.; Fattori, P.; Galletti, C. (2013). "The functional role of the medial motion area V6". Frontiers in Behavioral Neuroscience. 6: 91. doi:10.3389/fnbeh.2012.00091. PMC 3546310. PMID 23335889.
  73. ^ Galletti C, et al. (2005). "The relationship between V6 and PO in macaque extrastriate cortex" (PDF). European Journal of Neuroscience. 21 (4): 959–970. CiteSeerX 10.1.1.508.5602. doi:10.1111/j.1460-9568.2005.03911.x. PMID 15787702. S2CID 15020868. Archived from the original (PDF) on 2017-08-08. Retrieved 2018-09-14.
  74. ^ a b Galletti C, et al. (2003). "Role of the medial parieto-occipital cortex in the control of reaching and grasping movements". Experimental Brain Research. 153 (2): 158–170. doi:10.1007/s00221-003-1589-z. PMID 14517595. S2CID 1821863.
  75. ^ Baker JF, et al. (1981). "Visual response properties of neurons in four extrastriate visual areas of the owl monkey (Aotus trivirgatus): a quantitative comparison of medial, dorsomedial, dorsolateral, and middle temporal areas". Journal of Neurophysiology. 45 (3): 397–416. doi:10.1152/jn.1981.45.3.397. PMID 7218008. S2CID 9865958.
  76. ^ Lui LL, et al. (2006). "Functional response properties of neurons in the dorsomedial visual area of New World monkeys (Callithrix jacchus)". Cerebral Cortex. 16 (2): 162–177. doi:10.1093/cercor/bhi094. PMID 15858163.
  77. ^ a b c Denis Ducreux. "Calcarine (Visual) Cortex". Connectopedia Knowledge Database. Archived from the original on 2018-01-20. Retrieved 2018-01-25.
  78. ^ Pitzalis S, Sereno MI, Committeri G, Fattori P, Galati G, Patria F, Galletti C (2010). "Human v6: The medial motion area". Cerebral Cortex. 20 (2): 411–424. doi:10.1093/cercor/bhp112. PMC 2803738. PMID 19502476.
  79. ^ a b c Fattori P, Raos V, Breveglieri R, Bosco A, Marzocchi N, Galletti C (2010). "The Dorsomedial Pathway is Not Just for Reaching: Grasping Neurons in the Medial Parieto-Occipital Cortex of the Macaque Monkey". Journal of Neuroscience. 30 (1): 342–349. doi:10.1523/JNEUROSCI.3800-09.2010. PMC 6632536. PMID 20053915.
  80. ^ Monaco, Simona; Malfatti, Giulia; Zendron, Alessandro; Pellencin, Elisa; Turella, Luca (2019). "Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation". Brain Structure and Function. 224 (9): 3291–3308. doi:10.1007/s00429-019-01970-1. PMID 31673774. S2CID 207811473.
