Language processing in the brain

From Wikipedia, the free encyclopedia
Dual stream connectivity between the auditory cortex and frontal lobe of monkeys and humans. Top: The auditory cortex of the monkey (left) and human (right) is schematically depicted on the supratemporal plane and observed from above (with the parieto-frontal operculi removed). Bottom: The brain of the monkey (left) and human (right) is schematically depicted and displayed from the side. Orange frames mark the region of the auditory cortex, which is displayed in the top sub-figures. Top and Bottom: Blue colors mark regions affiliated with the ADS, and red colors mark regions affiliated with the AVS (dark red and blue regions mark the primary auditory fields). Abbreviations: AMYG-amygdala, HG-Heschl's gyrus, FEF-frontal eye field, IFG-inferior frontal gyrus, INS-insula, IPS-intra-parietal sulcus, MTG-middle temporal gyrus, PC-pitch center, PMd-dorsal premotor cortex, PP-planum polare, PT-planum temporale, TP-temporal pole, Spt-sylvian parietal-temporal, pSTG/mSTG/aSTG-posterior/middle/anterior superior temporal gyrus, CL/ML/AL/RTL-caudo-/middle-/antero-/rostrotemporal-lateral belt area, CPB/RPB-caudal/rostral parabelt fields. Used with permission from Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.

Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.[1]

Throughout the 20th century the dominant model[2] for language processing in the brain was the Wernicke-Lichtheim-Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, due to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, a dual auditory pathway[3][4] has been revealed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream (ADS) connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution,[5][6] the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution.

Neurological mechanism of language processing

History of neurolinguistics

Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke-Lichtheim-Geschwind model.[7][2][8] This model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word reception center (Wernicke's area) that is located in the left temporoparietal junction. This region then projects to a word production center (Broca's area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of a clear definition for the contribution of Wernicke's and Broca's regions to human language also made it very difficult to identify their homologues in other primates.[9] With the advent of MRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions.[10][11][12][13][14][15][16] The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.

Anatomy of the auditory ventral and dorsal streams

In the last two decades, significant advances have occurred in our understanding of the neural processing of sounds in primates. Initially by recording neural activity in the auditory cortices of monkeys,[17][18] and later elaborated via histological staining[19][20][21] and fMRI scanning studies,[22] three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them (Figure 1 top left). Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM).[19][23][24][25] Recently, evidence has accumulated that indicates homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl's gyrus,[26][27] and by mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR) and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1).[28][29][30][31][32] Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the auditory cortex of the monkey.
Recordings from the surface of the auditory cortex (supra-temporal plane) showed that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and that the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1 top right).[33][34] Consistent with connections from area hR to the aSTG and from hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition (auditory agnosia), who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG.[35] This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.[36]

Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG)[37][38] and amygdala.[39] Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG.[40][41][42][43][44][45] This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG).[46][38] Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS).[47][48][49][50][51][52] This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species (monkey,[51] human[53][54][55][56][57][58]). In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-red arrows).

Auditory ventral stream

Sound Recognition

Converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1,[59] and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus (area hR) than in the posterior Heschl's gyrus (area hA1).[60] In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1-area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects.[17] The anterior auditory fields of monkeys were also demonstrated to show selectivity for con-specific vocalizations with intra-cortical recordings[40][18][61] and functional imaging.[62][41][42] One fMRI monkey study further demonstrated a role of the aSTG in the recognition of individual voices.[41] The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with the isolation of auditory objects from background noise,[63][64] and with the recognition of spoken words,[65][66][67][68][69][70][71] voices,[72] melodies,[73][74] environmental sounds,[75][76][77] and non-speech communicative sounds.[78] A meta-analysis of fMRI studies[79] further demonstrated a functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds).
A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than in an unfamiliar foreign language.[80] Consistently, electro-stimulation to the aSTG of this patient resulted in impaired speech perception[80] (see also[81][82] for similar results). Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music.[80] An fMRI study of a patient with impaired sound recognition (auditory agnosia) due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when hearing spoken words and environmental sounds.[35] Recordings from the anterior auditory cortex of monkeys while they maintained learned sounds in working memory,[45] and the debilitating effect of induced lesions to this region on working memory recall,[83][84][85] further implicate the AVS in maintaining perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported to be active during rehearsal of heard syllables with MEG[86] and fMRI.[87] The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.[88]

In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships (see also the reviews by[3][4] discussing this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported[89][90] to have an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage[13][91] and were shown to occur in non-aphasic patients after electro-stimulation to this region[92][82] or to the underlying white matter pathway.[93] Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text,[65][94] and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.[95]

Sentence comprehension

In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concepts 'blue' and 'shirt' to create the concept of a 'blue shirt'). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations, and sentence-like sequences of environmental sounds.[96][97][98][99][100][101][102][103] One fMRI study[104] in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. An EEG study[105] that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage concluded that the MTG-TP in both hemispheres participates in the automatic (rule-based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntax analysis (P600 component). Patients with damage to the MTG-TP region have also been reported with impaired sentence comprehension.[13][106][107] See review[108] for more information on this topic.

Bilaterality

In contradiction to the Wernicke-Lichtheim-Geschwind model, which holds that sound recognition occurs solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the WADA procedure[109]) or intra-cortical recordings from each hemisphere[95] provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported a vocabulary in the right hemisphere that almost matches in size that of the left hemisphere[110] (the right-hemisphere vocabulary was equivalent to the vocabulary of a healthy 11-year-old child). This bilateral recognition of sounds is also consistent with the finding that a unilateral lesion to the auditory cortex rarely results in a deficit of auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does.[111][112] Finally, as mentioned earlier, an fMRI scan of an auditory agnosia patient demonstrated bilaterally reduced activation in the anterior auditory cortices,[35] and electro-stimulation to these regions in both hemispheres resulted in impaired speech recognition.[80]

Auditory dorsal stream

Sound localization

The most established role of the ADS is in audiospatial processing. This is evidenced by studies that recorded neural activity from the auditory cortex of monkeys and correlated the strongest selectivity to changes in sound location with the posterior auditory fields (areas CM-CL), intermediate selectivity with primary area A1, and very weak selectivity with the anterior auditory fields.[113][114][18][115][116] In humans, behavioral studies of brain-damaged patients[117][118] and EEG recordings from healthy participants[119] demonstrated that sound localization is processed independently of sound recognition, and thus is likely independent of processing in the AVS. Consistently, a working memory study[120] reported two independent working memory stores, one for acoustic properties and one for locations. Functional imaging studies that contrasted sound discrimination and sound localization[121][122][123][124][77][125] reported a correlation between sound discrimination and activation in the mSTG-aSTG, and a correlation between sound localization and activation in the pSTG and PT, with some studies further reporting activation in the Spt-IPL region and frontal lobe.[126][76][127] Some fMRI studies also reported that activation in the pSTG and Spt-IPL regions increased when individuals perceived sounds in motion.[128][129][130] EEG studies using source localization also identified the pSTG-Spt region of the ADS as the sound localization processing center.[131][132] A combined fMRI and MEG study corroborated the role of the ADS in audiospatial processing by demonstrating that changes in sound location resulted in activation spreading from Heschl's gyrus posteriorly along the pSTG and terminating in the IPL.[133] In another MEG study, the IPL and frontal lobe were shown to be active during maintenance of sound locations in working memory.[134]

Guidance of eye movements

In addition to localizing sounds, the ADS also appears to encode the sound location in memory, and to use this information for guiding eye movements. Evidence for the role of the ADS in encoding sounds into working memory is provided by studies that trained monkeys in a delayed matching-to-sample task and reported activation in areas CM-CL[135] and the IPS[136][137] during the delay phase. Influence of this spatial information on eye movements occurs via projections of the ADS into the frontal eye field (FEF; a premotor area that is responsible for guiding eye movements) located in the frontal lobe. This is demonstrated by anatomical tracing studies that reported connections between areas CM-CL-IPS and the FEF,[46][138] and by electro-physiological recordings that reported neural activity in both the IPS[136][137][139][138] and the FEF[140][141] prior to saccadic eye movements toward auditory targets.

Integration of locations with auditory objects

A surprising function of the ADS is the discrimination and possible identification of sounds, which is commonly ascribed to the anterior STG and STS of the AVS. However, electrophysiological recordings from the posterior auditory cortex (areas CM-CL)[142][115] and IPS of monkeys,[143] as well as a PET study in monkeys,[144] reported neurons that are selective to monkey vocalizations. One of these studies[115] also reported neurons in areas CM-CL that are characterized by dual selectivity for both a vocalization and a sound location. A monkey study that recorded electrophysiological activity from neurons in the posterior insula also reported neurons that discriminate monkey calls based on the identity of the caller.[145] Similarly, human fMRI studies that instructed participants to discriminate voices reported an activation cluster in the pSTG.[146][147][148] A study that recorded activity from the auditory cortex of an epileptic patient further reported that the pSTG, but not the aSTG, was selective for the presence of a new speaker.[80] A study that scanned fetuses in their third trimester of pregnancy with fMRI further reported activation in area Spt when the hearing of voices was contrasted to pure tones.[149] The researchers also reported that a sub-region of area Spt was more selective to the mother's voice than to unfamiliar female voices. This study thus suggests that the ADS is capable of identifying voices in addition to discriminating them.

The manner in which sound recognition in the pSTG-PT-Spt regions of the ADS differs from sound recognition in the anterior STG and STS of the AVS[146][72][150][40][41] was shown via electro-stimulation of an epileptic patient.[80] This study reported that electro-stimulation of the aSTG resulted in changes in the perceived pitch of voices (including the patient's own voice), whereas electro-stimulation of the pSTG resulted in reports that her voice was "drifting away." This report indicates a role for the pSTG in the integration of sound location with an individual voice. Consistent with this role of the ADS is a study that reported that patients with AVS damage but spared ADS (surgical removal of the anterior STG/MTG) were no longer capable of isolating environmental sounds in the contralesional space, whereas their ability to isolate and discriminate human voices remained intact.[151] Supporting a role for the pSTG-PT-Spt of the ADS in integrating auditory objects with sound locations are also studies that demonstrate a role for this region in the isolation of specific sounds. For example, two functional imaging studies correlated circumscribed pSTG-PT activation with the spreading of sounds into an increasing number of locations.[152][153] Accordingly, an fMRI study correlated the perception of acoustic cues that are necessary for separating musical sounds (pitch chroma) with pSTG-PT activation.[125]

Integration of phonemes with lip-movements

Although sound perception is primarily ascribed to the AVS, the ADS appears to be associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies[154] in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.[155] Consistent with the role of the ADS in discriminating phonemes,[154] studies have ascribed the integration of phonemes and their corresponding lip movements (i.e., visemes) to the pSTS of the ADS. For example, an fMRI study[156] correlated activation in the pSTS with the McGurk illusion (in which hearing the syllable "ba" while seeing the viseme "ga" results in the perception of the syllable "da"). Another study found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion.[157] The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality. The study reported that the pSTS selects for the combined increase of the clarity of faces and spoken words.[158] Corroborating evidence has been provided by an fMRI study[159] that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools). This study reported the detection of speech-selective compartments in the pSTS. In addition, an fMRI study[160] that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation.
For a review presenting additional converging evidence regarding the role of the pSTS and ADS in phoneme-viseme integration, see.[161]

Phonological long-term memory

A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study[162][163] examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a "goat" a "sheep," an example of semantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying "gof" instead of "goat," an example of phonemic paraphasia). Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation.[82][164][93] Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during the learning and recall of object names.[165] A study that induced magnetic interference in participants' IPL while they answered questions about an object reported that the participants were capable of answering questions regarding the object's characteristics or perceptual attributes but were impaired when asked whether the word contained two or three syllables.[166] An MEG study has also correlated recovery from anomia (a disorder characterized by an impaired ability to name objects) with changes in IPL activation.[167] Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG.[168][169] Because evidence shows that, in bilinguals, different phonological representations of the same word share the same semantic representation,[170] this increase in density in the IPL verifies the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to the semantic lexicon of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this finding, cortical density in the IPL of monolinguals also correlates with vocabulary size.[171][172] Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. Based on these associations, the semantic analysis of text has been linked to the inferior temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt-IPL.[173][174][175]

Phonological working memory

Working memory is often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evidenced by the finding[176][177] that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (the phonological similarity effect).[176] Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory.[178] Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory.[179][180][181][182] Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models describing working memory as the combination of maintaining representations in the mechanism of attention in parallel to temporarily activating representations in long-term memory.[177][183][184][185] It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension.[186] For a review of the role of the ADS in working memory, see.[187]

The 'from where to what' model of language evolution hypothesizes seven stages of language evolution: 1. The origin of speech is the exchange of contact calls between mothers and offspring, used to relocate each other in cases of separation. 2. Offspring of early Homo modified the contact calls with intonations in order to emit two types of contact calls: contact calls that signal a low level of distress and contact calls that signal a high level of distress. 3. The use of two types of contact calls enabled the first question-answer conversation. In this scenario, the offspring emits a low-level distress call to express a desire to interact with an object, and the mother responds with a low-level distress call to enable the interaction or a high-level distress call to prohibit it. 4. The use of intonations improved over time, and eventually individuals acquired sufficient vocal control to invent new words for objects. 5. At first, offspring learned the calls from their parents by imitating their lip movements. 6. As the learning of calls improved, babies learned new calls (i.e., phonemes) through lip imitation only during infancy. After that period, the memory of phonemes lasted for a lifetime, and older children became capable of learning new calls (through mimicry) without observing their parents' lip movements. 7. Individuals became capable of rehearsing sequences of calls. This enabled the learning of words with several syllables, which increased vocabulary size. Further developments to the brain circuit responsible for rehearsing poly-syllabic words resulted in individuals capable of rehearsing lists of words (phonological working memory), which served as the platform for communication with sentences.

Evolution of language

It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was conducted in the 'From where to what' model of language evolution.[5][6] In accordance with this model, each function of the ADS indicates a different intermediate phase in the evolution of language. The roles of sound localization and of integration of sound location with voices and auditory objects are interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying the contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of a gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents' vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted in individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infrastructure for communicating with sentences.

References

  1. ^ Seidenberg MS, Petitto LA (1987). "Communication, symbolic communication, and language: Comment on Savage-Rumbaugh, McDonald, Sevcik, Hopkins, and Rupert (1986)". Journal of Experimental Psychology: General. 116 (3): 279–287. doi:10.1037/0096-3445.116.3.279.
  2. ^ a b Geschwind N (September 1965). "Disconnexion syndromes in animals and man". Brain. 88 (2): 237–294. doi:10.1093/brain/88.2.237.
  3. ^ a b Hickok G, Poeppel D (May 2007). "The cortical organization of speech processing". Nature Reviews. Neuroscience. 8 (5): 393–402. doi:10.1038/nrn2113. PMID 17431404.
  4. ^ a b Gow DW (June 2012). "The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing". Brain and Language. 121 (3): 273–88. doi:10.1016/j.bandl.2012.03.005. PMID 22498237.
  5. ^ a b Poliva O (2017-09-20). "From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans". F1000Research. 4: 67. doi:10.12688/f1000research.6175.3. PMC 5600004. PMID 28928931. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  6. ^ a b Poliva O (2016). "From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language". Frontiers in Neuroscience. 10: 307. doi:10.3389/fnins.2016.00307. PMC 4928493. PMID 27445676. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  7. ^ Lichtheim L (January 1885). "On Aphasia". Brain. 7 (4): 433–484. doi:10.1093/brain/7.4.433.
  8. ^ Wernicke C (1974). Der aphasische Symptomenkomplex. Springer Berlin Heidelberg. pp. 1–70. ISBN 978-3-540-06905-8.
  9. ^ Aboitiz F, García VR (December 1997). "The evolutionary origin of the language areas in the human brain. A neuroanatomical perspective". Brain Research. Brain Research Reviews. 25 (3): 381–96. doi:10.1016/s0165-0173(97)00053-2. PMID 9495565.
  10. ^ Anderson JM, Gilmore R, Roper S, Crosson B, Bauer RM, Nadeau S, Beversdorf DQ, Cibula J, Rogish M, Kortencamp S, Hughes JD, Gonzalez Rothi LJ, Heilman KM (October 1999). "Conduction aphasia and the arcuate fasciculus: A reexamination of the Wernicke-Geschwind model". Brain and Language. 70 (1): 1–12. doi:10.1006/brln.1999.2135. PMID 10534369.
  11. ^ DeWitt I, Rauschecker JP (November 2013). "Wernicke's area revisited: parallel streams and word processing". Brain and Language. 127 (2): 181–91. doi:10.1016/j.bandl.2013.09.014. PMID 24404576.
  12. ^ Dronkers NF (January 2000). "The pursuit of brain-language relationships". Brain and Language. 71 (1): 59–61. doi:10.1006/brln.1999.2212. PMID 10716807.
  13. ^ a b c Dronkers NF, Wilkins DP, Van Valin RD, Redfern BB, Jaeger JJ (May 2004). "Lesion analysis of the brain areas involved in language comprehension". Cognition. 92 (1–2): 145–77. doi:10.1016/j.cognition.2003.11.002. PMID 15037129.
  14. ^ Mesulam MM, Thompson CK, Weintraub S, Rogalski EJ (August 2015). "The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia". Brain. 138 (Pt 8): 2423–37. doi:10.1093/brain/awv154. PMID 26112340.
  15. ^ Poeppel D, Emmorey K, Hickok G, Pylkkänen L (October 2012). "Towards a new neurobiology of language". The Journal of Neuroscience. 32 (41): 14125–31. doi:10.1523/jneurosci.3244-12.2012. PMID 23055482.
  16. ^ Vignolo LA, Boccardi E, Caverni L (March 1986). "Unexpected CT-scan findings in global aphasia". Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 22 (1): 55–69. doi:10.1016/s0010-9452(86)80032-6. PMID 2423296.
  17. ^ a b Bendor D, Wang X (August 2006). "Cortical representations of pitch in monkeys and humans". Current Opinion in Neurobiology. 16 (4): 391–9. doi:10.1016/j.conb.2006.07.001. PMID 16842992.
  18. ^ a b c Rauschecker JP, Tian B, Hauser M (April 1995). "Processing of complex sounds in the macaque nonprimary auditory cortex". Science. 268 (5207): 111–4. doi:10.1126/science.7701330. PMID 7701330.
  19. ^ a b de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA (May 2006). "Cortical connections of the auditory cortex in marmoset monkeys: core and medial belt regions". The Journal of Comparative Neurology. 496 (1): 27–71. doi:10.1002/cne.20923. PMID 16528722.
  20. ^ de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA (May 2012). "Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions". Anatomical Record. 295 (5): 800–21. doi:10.1002/ar.22451. PMID 22461313.
  21. ^ Kaas JH, Hackett TA (October 2000). "Subdivisions of auditory cortex and processing streams in primates". Proceedings of the National Academy of Sciences of the United States of America. 97 (22): 11793–9. doi:10.1073/pnas.97.22.11793. PMID 11050211.
  22. ^ Petkov CI, Kayser C, Augath M, Logothetis NK (July 2006). "Functional imaging reveals numerous fields in the monkey auditory cortex". PLoS Biology. 4 (7): e215. doi:10.1371/journal.pbio.0040215. PMID 16774452.
  23. ^ Morel A, Garraghty PE, Kaas JH (September 1993). "Tonotopic organization, architectonic fields, and connections of auditory cortex in macaque monkeys". The Journal of Comparative Neurology. 335 (3): 437–59. doi:10.1002/cne.903350312. PMID 7693772.
  24. ^ Rauschecker JP, Tian B (October 2000). "Mechanisms and streams for processing of "what" and "where" in auditory cortex". Proceedings of the National Academy of Sciences of the United States of America. 97 (22): 11800–6. doi:10.1073/pnas.97.22.11800. PMID 11050212.
  25. ^ Rauschecker JP, Tian B, Pons T, Mishkin M (May 1997). "Serial and parallel processing in rhesus monkey auditory cortex". The Journal of Comparative Neurology. 382 (1): 89–103. doi:10.1002/(sici)1096-9861(19970526)382:1<89::aid-cne6>3.3.co;2-y. PMID 9136813.
  26. ^ Sweet RA, Dorph-Petersen KA, Lewis DA (October 2005). "Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus". The Journal of Comparative Neurology. 491 (3): 270–89. doi:10.1002/cne.20702. PMID 16134138.
  27. ^ Wallace MN, Johnston PW, Palmer AR (April 2002). "Histochemical identification of cortical areas in the auditory region of the human brain". Experimental Brain Research. 143 (4): 499–508. doi:10.1007/s00221-002-1014-z. PMID 11914796.
  28. ^ Da Costa S, van der Zwaag W, Marques JP, Frackowiak RS, Clarke S, Saenz M (October 2011). "Human primary auditory cortex follows the shape of Heschl's gyrus". The Journal of Neuroscience. 31 (40): 14067–75. doi:10.1523/jneurosci.2000-11.2011. PMID 21976491.
  29. ^ Humphries C, Liebenthal E, Binder JR (April 2010). "Tonotopic organization of human auditory cortex". NeuroImage. 50 (3): 1202–11. doi:10.1016/j.neuroimage.2010.01.046. PMID 20096790.
  30. ^ Langers DR, van Dijk P (September 2012). "Mapping the tonotopic organization in human auditory cortex with minimally salient acoustic stimulation". Cerebral Cortex. 22 (9): 2024–38. doi:10.1093/cercor/bhr282. PMID 21980020.
  31. ^ Striem-Amit E, Hertz U, Amedi A (March 2011). "Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding FMRI". PLOS One. 6 (3): e17832. doi:10.1371/journal.pone.0017832. PMID 21448274.
  32. ^ Woods DL, Herron TJ, Cate AD, Yund EW, Stecker GC, Rinne T, Kang X (2010). "Functional properties of human auditory cortical fields". Frontiers in Systems Neuroscience. 4: 155. doi:10.3389/fnsys.2010.00155. PMID 21160558.
  33. ^ Gourévitch B, Le Bouquin Jeannès R, Faucon G, Liégeois-Chauvel C (March 2008). "Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas". Hearing Research. 237 (1–2): 1–18. doi:10.1016/j.heares.2007.12.003. PMID 18255243.
  34. ^ Guéguin M, Le Bouquin-Jeannès R, Faucon G, Chauvel P, Liégeois-Chauvel C (February 2007). "Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing". Cerebral Cortex. 17 (2): 304–13. doi:10.1093/cercor/bhj148. PMID 16514106.
  35. ^ a b c Poliva O, Bestelmeyer PE, Hall M, Bultitude JH, Koller K, Rafal RD (September 2015). "Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus". Cognitive and Behavioral Neurology. 28 (3): 160–80. doi:10.1097/wnn.0000000000000072. PMID 26413744.
  36. ^ Chang EF, Edwards E, Nagarajan SS, Fogelson N, Dalal SS, Canolty RT, Kirsch HE, Barbaro NM, Knight RT (June 2011). "Cortical spatio-temporal dynamics underlying phonological target detection in humans". Journal of Cognitive Neuroscience. 23 (6): 1437–46. doi:10.1162/jocn.2010.21466. PMID 20465359.
  37. ^ Muñoz M, Mishkin M, Saunders RC (September 2009). "Resection of the medial temporal lobe disconnects the rostral superior temporal gyrus from some of its projection targets in the frontal lobe and thalamus". Cerebral Cortex. 19 (9): 2114–30. doi:10.1093/cercor/bhn236. PMID 19150921.
  38. ^ a b Romanski LM, Bates JF, Goldman-Rakic PS (January 1999). "Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey". The Journal of Comparative Neurology. 403 (2): 141–57. doi:10.1002/(sici)1096-9861(19990111)403:2<141::aid-cne1>3.0.co;2-v. PMID 9886040.
  39. ^ Tanaka D (June 1976). "Thalamic projections of the dorsomedial prefrontal cortex in the rhesus monkey (Macaca mulatta)". Brain Research. 110 (1): 21–38. doi:10.1016/0006-8993(76)90206-7. PMID 819108.
  40. ^ a b c Perrodin C, Kayser C, Logothetis NK, Petkov CI (August 2011). "Voice cells in the primate temporal lobe". Current Biology. 21 (16): 1408–15. doi:10.1016/j.cub.2011.07.028. PMID 21835625.
  41. ^ a b c d Petkov CI, Kayser C, Steudel T, Whittingstall K, Augath M, Logothetis NK (March 2008). "A voice region in the monkey brain". Nature Neuroscience. 11 (3): 367–74. doi:10.1038/nn2043. PMID 18264095.
  42. ^ a b Poremba A, Malloy M, Saunders RC, Carson RE, Herscovitch P, Mishkin M (January 2004). "Species-specific calls evoke asymmetric activity in the monkey's temporal poles". Nature. 427 (6973): 448–51. doi:10.1038/nature02268. PMID 14749833.
  43. ^ Romanski LM, Averbeck BB, Diltz M (February 2005). "Neural representation of vocalizations in the primate ventrolateral prefrontal cortex". Journal of Neurophysiology. 93 (2): 734–47. doi:10.1152/jn.00675.2004. PMID 15371495.
  44. ^ Russ BE, Ackelson AL, Baker AE, Cohen YE (January 2008). "Coding of auditory-stimulus identity in the auditory non-spatial processing stream". Journal of Neurophysiology. 99 (1): 87–95. doi:10.1152/jn.01069.2007. PMID 18003874.
  45. ^ a b Tsunada J, Lee JH, Cohen YE (June 2011). "Representation of speech categories in the primate auditory cortex". Journal of Neurophysiology. 105 (6): 2634–46. doi:10.1152/jn.00037.2011. PMID 21346209.
  46. ^ a b Cusick CG, Seltzer B, Cola M, Griggs E (September 1995). "Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: evidence for subdivisions of superior temporal polysensory cortex". The Journal of Comparative Neurology. 360 (3): 513–35. doi:10.1002/cne.903600312. PMID 8543656.
  47. ^ Cohen YE, Russ BE, Gifford GW, Kiringoda R, MacLean KA (December 2004). "Selectivity for the spatial and nonspatial attributes of auditory stimuli in the ventrolateral prefrontal cortex". The Journal of Neuroscience. 24 (50): 11307–16. doi:10.1523/jneurosci.3935-04.2004. PMID 15601937.
  48. ^ Deacon TW (February 1992). "Cortical connections of the inferior arcuate sulcus cortex in the macaque brain". Brain Research. 573 (1): 8–26. doi:10.1016/0006-8993(92)90109-m. ISSN 0006-8993.
  49. ^ Lewis JW, Van Essen DC (December 2000). "Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey". The Journal of Comparative Neurology. 428 (1): 112–37. doi:10.1002/1096-9861(20001204)428:1<112::aid-cne8>3.0.co;2-9. PMID 11058227.
  50. ^ Roberts AC, Tomic DL, Parkinson CH, Roeling TA, Cutter DJ, Robbins TW, Everitt BJ (May 2007). "Forebrain connectivity of the prefrontal cortex in the marmoset monkey (Callithrix jacchus): an anterograde and retrograde tract-tracing study". The Journal of Comparative Neurology. 502 (1): 86–112. doi:10.1002/cne.21300. PMID 17335041.
  51. ^ a b Schmahmann JD, Pandya DN, Wang R, Dai G, D'Arceuil HE, de Crespigny AJ, Wedeen VJ (March 2007). "Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography". Brain. 130 (Pt 3): 630–53. doi:10.1093/brain/awl359. PMID 17293361.
  52. ^ Seltzer B, Pandya DN (July 1984). "Further observations on parieto-temporal connections in the rhesus monkey". Experimental Brain Research. 55 (2): 301–12. doi:10.1007/bf00237280. PMID 6745368.
  53. ^ Catani M, Jones DK, ffytche DH (January 2005). "Perisylvian language networks of the human brain". Annals of Neurology. 57 (1): 8–16. doi:10.1002/ana.20319. PMID 15597383.
  54. ^ Frey S, Campbell JS, Pike GB, Petrides M (November 2008). "Dissociating the human language pathways with high angular resolution diffusion fiber tractography". The Journal of Neuroscience. 28 (45): 11435–44. doi:10.1523/jneurosci.2388-08.2008. PMID 18987180.
  55. ^ Makris N, Papadimitriou GM, Kaiser JR, Sorg S, Kennedy DN, Pandya DN (April 2009). "Delineation of the middle longitudinal fascicle in humans: a quantitative, in vivo, DT-MRI study". Cerebral Cortex. 19 (4): 777–85. doi:10.1093/cercor/bhn124. PMID 18669591.
  56. ^ Menjot de Champfleur N, Lima Maldonado I, Moritz-Gasser S, Machi P, Le Bars E, Bonafé A, Duffau H (January 2013). "Middle longitudinal fasciculus delineation within language pathways: a diffusion tensor imaging study in human". European Journal of Radiology. 82 (1): 151–7. doi:10.1016/j.ejrad.2012.05.034. PMID 23084876.
  57. ^ Turken AU, Dronkers NF (2011). "The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses". Frontiers in Systems Neuroscience. 5: 1. doi:10.3389/fnsys.2011.00001. PMID 21347218.
  58. ^ Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry MS, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C (November 2008). "Ventral and dorsal pathways for language". Proceedings of the National Academy of Sciences of the United States of America. 105 (46): 18035–40. doi:10.1073/pnas.0805234105. PMID 19004769.
  59. ^ Yin P, Mishkin M, Sutter M, Fritz JB (December 2008). "Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R". Journal of Neurophysiology. 100 (6): 3009–29. doi:10.1152/jn.00828.2007. PMID 18842950.
  60. ^ Steinschneider M, Volkov IO, Fishman YI, Oya H, Arezzo JC, Howard MA (February 2005). "Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter". Cerebral Cortex. 15 (2): 170–86. doi:10.1093/cercor/bhh120. PMID 15238437.
  61. ^ Russ BE, Ackelson AL, Baker AE, Cohen YE (January 2008). "Coding of auditory-stimulus identity in the auditory non-spatial processing stream". Journal of Neurophysiology. 99 (1): 87–95. doi:10.1152/jn.01069.2007. PMID 18003874.
  62. ^ Joly O, Pallier C, Ramus F, Pressnitzer D, Vanduffel W, Orban GA (September 2012). "Processing of vocalizations in humans and monkeys: a comparative fMRI study". NeuroImage. 62 (3): 1376–89. doi:10.1016/j.neuroimage.2012.05.070. PMID 22659478.
  63. ^ Scheich H, Baumgart F, Gaschler-Markefski B, Tegeler C, Tempelmann C, Heinze HJ, Schindler F, Stiller D (February 1998). "Functional magnetic resonance imaging of a human auditory cortex area involved in foreground-background decomposition". The European Journal of Neuroscience. 10 (2): 803–9. doi:10.1046/j.1460-9568.1998.00086.x. PMID 9749748.
  64. ^ Zatorre RJ, Bouffard M, Belin P (April 2004). "Sensitivity to auditory object features in human temporal neocortex". The Journal of Neuroscience. 24 (14): 3637–42. doi:10.1523/jneurosci.5458-03.2004. PMID 15071112.
  65. ^ a b Binder JR, Desai RH, Graves WW, Conant LL (December 2009). "Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies". Cerebral Cortex. 19 (12): 2767–96. doi:10.1093/cercor/bhp055. PMID 19329570.
  66. ^ Davis MH, Johnsrude IS (April 2003). "Hierarchical processing in spoken language comprehension". The Journal of Neuroscience. 23 (8): 3423–31. doi:10.1523/jneurosci.23-08-03423.2003. PMID 12716950.
  67. ^ Liebenthal E, Binder JR, Spitzer SM, Possing ET, Medler DA (October 2005). "Neural substrates of phonemic perception". Cerebral Cortex. 15 (10): 1621–31. doi:10.1093/cercor/bhi040. PMID 15703256.
  68. ^ Narain C, Scott SK, Wise RJ, Rosen S, Leff A, Iversen SD, Matthews PM (December 2003). "Defining a left-lateralized response specific to intelligible speech using fMRI". Cerebral Cortex. 13 (12): 1362–8. doi:10.1093/cercor/bhg083. PMID 14615301.
  69. ^ Obleser J, Boecker H, Drzezga A, Haslinger B, Hennenlotter A, Roettinger M, Eulitz C, Rauschecker JP (July 2006). "Vowel sound extraction in anterior superior temporal cortex". Human Brain Mapping. 27 (7): 562–71. doi:10.1002/hbm.20201. PMID 16281283.
  70. ^ Obleser J, Zimmermann J, Van Meter J, Rauschecker JP (October 2007). "Multiple stages of auditory speech perception reflected in event-related FMRI". Cerebral Cortex. 17 (10): 2251–7. doi:10.1093/cercor/bhl133. PMID 17150986.
  71. ^ Scott SK, Blank CC, Rosen S, Wise RJ (December 2000). "Identification of a pathway for intelligible speech in the left temporal lobe". Brain. 123 Pt 12 (12): 2400–6. doi:10.1093/brain/123.12.2400. PMID 11099443.
  72. ^ a b Belin P, Zatorre RJ (November 2003). "Adaptation to speaker's voice in right anterior temporal lobe". NeuroReport. 14 (16): 2105–2109. doi:10.1097/00001756-200311140-00019. PMID 14600506.
  73. ^ Benson RR, Whalen DH, Richardson M, Swainson B, Clark VP, Lai S, Liberman AM (September 2001). "Parametrically dissociating speech and nonspeech perception in the brain using fMRI". Brain and Language. 78 (3): 364–96. doi:10.1006/brln.2001.2484. PMID 11703063.
  74. ^ Leaver AM, Rauschecker JP (June 2010). "Cortical representation of natural complex sounds: effects of acoustic features and auditory object category". The Journal of Neuroscience. 30 (22): 7604–12. doi:10.1523/jneurosci.0296-10.2010. PMID 20519535.
  75. ^ Lewis JW, Phinney RE, Brefczynski-Lewis JA, DeYoe EA (August 2006). "Lefties get it "right" when hearing tool sounds". Journal of Cognitive Neuroscience. 18 (8): 1314–30. doi:10.1162/jocn.2006.18.8.1314. PMID 16859417.
  76. ^ a b Maeder PP, Meuli RA, Adriani M, Bellmann A, Fornari E, Thiran JP, Pittet A, Clarke S (October 2001). "Distinct pathways involved in sound recognition and localization: a human fMRI study". NeuroImage. 14 (4): 802–16. doi:10.1006/nimg.2001.0888. PMID 11554799.
  77. ^ a b Viceic D, Fornari E, Thiran JP, Maeder PP, Meuli R, Adriani M, Clarke S (November 2006). "Human auditory belt areas specialized in sound recognition: a functional magnetic resonance imaging study". NeuroReport. 17 (16): 1659–62. doi:10.1097/01.wnr.0000239962.75943.dd. PMID 17047449.
  78. ^ Shultz S, Vouloumanos A, Pelphrey K (May 2012). "The superior temporal sulcus differentiates communicative and noncommunicative auditory signals". Journal of Cognitive Neuroscience. 24 (5): 1224–32. doi:10.1162/jocn_a_00208. PMID 22360624.
  79. ^ DeWitt I, Rauschecker JP (February 2012). "Phoneme and word recognition in the auditory ventral stream". Proceedings of the National Academy of Sciences of the United States of America. 109 (8): E505–14. doi:10.1073/pnas.1113427109. PMID 22308358.
  80. ^ a b c d e f Lachaux JP, Jerbi K, Bertrand O, Minotti L, Hoffmann D, Schoendorff B, Kahane P (October 2007). "A blueprint for real-time functional mapping via human intracranial recordings". PLOS One. 2 (10): e1094. doi:10.1371/journal.pone.0001094. PMID 17971857.
  81. ^ Matsumoto R, Imamura H, Inouchi M, Nakagawa T, Yokoyama Y, Matsuhashi M, Mikuni N, Miyamoto S, Fukuyama H, Takahashi R, Ikeda A (April 2011). "Left anterior temporal cortex actively engages in speech perception: A direct cortical stimulation study". Neuropsychologia. 49 (5): 1350–1354. doi:10.1016/j.neuropsychologia.2011.01.023. PMID 21251921.
  82. ^ a b c Roux FE, Miskin K, Durand JB, Sacko O, Réhault E, Tanova R, Démonet JF (October 2015). "Electrostimulation mapping of comprehension of auditory and visual words". Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 71: 398–408. doi:10.1016/j.cortex.2015.07.001. PMID 26332785.
  83. ^ Fritz J, Mishkin M, Saunders RC (June 2005). "In search of an auditory engram". Proceedings of the National Academy of Sciences of the United States of America. 102 (26): 9359–64. doi:10.1073/pnas.0503998102. PMID 15967995.
  84. ^ Stepien LS, Cordeau JP, Rasmussen T (1960). "The effect of temporal lobe and hippocampal lesions on auditory and visual recent memory in monkeys". Brain. 83 (3): 470–489. doi:10.1093/brain/83.3.470. ISSN 0006-8950.
  85. ^ Strominger NL, Oesterreich RE, Neff WD (June 1980). "Sequential auditory and visual discriminations after temporal lobe ablation in monkeys". Physiology & Behavior. 24 (6): 1149–56. doi:10.1016/0031-9384(80)90062-1. PMID 6774349.
  86. ^ Kaiser J, Ripper B, Birbaumer N, Lutzenberger W (October 2003). "Dynamics of gamma-band activity in human magnetoencephalogram during auditory pattern working memory". NeuroImage. 20 (2): 816–27. doi:10.1016/s1053-8119(03)00350-1. PMID 14568454.
  87. ^ Buchsbaum BR, Olsen RK, Koch P, Berman KF (November 2005). "Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory". Neuron. 48 (4): 687–97. doi:10.1016/j.neuron.2005.09.029. PMID 16301183.
  88. ^ Scott BH, Mishkin M, Yin P (July 2012). "Monkeys have a limited form of short-term memory in audition". Proceedings of the National Academy of Sciences of the United States of America. 109 (30): 12237–41. doi:10.1073/pnas.1209685109. PMID 22778411.
  89. ^ Noppeney U, Patterson K, Tyler LK, Moss H, Stamatakis EA, Bright P, Mummery C, Price CJ (April 2007). "Temporal lobe lesions and semantic impairment: a comparison of herpes simplex virus encephalitis and semantic dementia". Brain. 130 (Pt 4): 1138–47. doi:10.1093/brain/awl344. PMID 17251241.
  90. ^ Patterson K, Nestor PJ, Rogers TT (December 2007). "Where do you know what you know? The representation of semantic knowledge in the human brain". Nature Reviews. Neuroscience. 8 (12): 976–87. doi:10.1038/nrn2277. PMID 18026167.
  91. ^ Schwartz MF, Kimberg DY, Walker GM, Faseyitan O, Brecher A, Dell GS, Coslett HB (December 2009). "Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia". Brain. 132 (Pt 12): 3411–27. doi:10.1093/brain/awp284. PMID 19942676.
  92. ^ Hamberger MJ, McClelland S, McKhann GM, Williams AC, Goodman RR (March 2007). "Distribution of auditory and visual naming sites in nonlesional temporal lobe epilepsy patients and patients with space-occupying temporal lobe lesions". Epilepsia. 48 (3): 531–8. doi:10.1111/j.1528-1167.2006.00955.x. PMID 17326797.
  93. ^ a b Duffau H (March 2008). "The anatomo-functional connectivity of language revisited. New insights provided by electrostimulation and tractography". Neuropsychologia. 46 (4): 927–34. doi:10.1016/j.neuropsychologia.2007.10.025. PMID 18093622.
  94. ^ Vigneau M, Beaucousin V, Hervé PY, Duffau H, Crivello F, Houdé O, Mazoyer B, Tzourio-Mazoyer N (May 2006). "Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing". NeuroImage. 30 (4): 1414–32. doi:10.1016/j.neuroimage.2005.11.002. PMID 16413796.
  95. ^ a b Creutzfeldt O, Ojemann G, Lettich E (October 1989). "Neuronal activity in the human lateral temporal lobe. I. Responses to speech". Experimental Brain Research. 77 (3): 451–75. doi:10.1007/bf00249600. PMID 2806441.
  96. ^ Mazoyer BM, Tzourio N, Frak V, Syrota A, Murayama N, Levrier O, Salamon G, Dehaene S, Cohen L, Mehler J (October 1993). "The cortical representation of speech". Journal of Cognitive Neuroscience. 5 (4): 467–79. doi:10.1162/jocn.1993.5.4.467. PMID 23964919.
  97. ^ Humphries C, Love T, Swinney D, Hickok G (October 2005). "Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing". Human Brain Mapping. 26 (2): 128–38. doi:10.1002/hbm.20148. PMID 15895428.
  98. ^ Humphries C, Willard K, Buchsbaum B, Hickok G (June 2001). "Role of anterior temporal cortex in auditory sentence comprehension: an fMRI study". NeuroReport. 12 (8): 1749–52. doi:10.1097/00001756-200106130-00046. PMID 11409752.
  99. ^ Vandenberghe R, Nobre AC, Price CJ (May 2002). "The response of left temporal cortex to sentences". Journal of Cognitive Neuroscience. 14 (4): 550–60. doi:10.1162/08989290260045800. PMID 12126497.
  100. ^ Friederici AD, Rüschemeyer SA, Hahne A, Fiebach CJ (February 2003). "The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes". Cerebral Cortex. 13 (2): 170–7. doi:10.1093/cercor/13.2.170. PMID 12507948.
  101. ^ Friederici AD, Rüschemeyer SA, Hahne A, Fiebach CJ (February 2003). "The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes". Cerebral Cortex. 13 (2): 170–7. doi:10.1016/j.neuroimage.2004.12.013. PMID 12507948.
  102. ^ Rogalsky C, Hickok G (April 2009). "Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex". Cerebral Cortex. 19 (4): 786–96. doi:10.1093/cercor/bhn126. PMC 2651476. PMID 18669589.
  103. ^ Pallier C, Devauchelle AD, Dehaene S (February 2011). "Cortical representation of the constituent structure of sentences". Proceedings of the National Academy of Sciences of the United States of America. 108 (6): 2522–7. doi:10.1073/pnas.1018711108. PMC 3038732. PMID 21224415.
  104. ^ Brennan J, Nir Y, Hasson U, Malach R, Heeger DJ, Pylkkänen L (February 2012). "Syntactic structure building in the anterior temporal lobe during natural story listening". Brain and Language. 120 (2): 163–73. doi:10.1016/j.bandl.2010.04.002. PMC 2947556. PMID 20472279.
  105. ^ Kotz SA, von Cramon DY, Friederici AD (October 2003). "Differentiation of syntactic processes in the left and right anterior temporal lobe: Event-related brain potential evidence from lesion patients". Brain and Language. 87 (1): 135–136. doi:10.1016/s0093-934x(03)00236-0.
  106. ^ Martin RC, Shelton JR, Yaffee LS (February 1994). "Language processing and working memory: Neuropsychological evidence for separate phonological and semantic capacities". Journal of Memory and Language. 33 (1): 83–111. doi:10.1006/jmla.1994.1005.
  107. ^ Magnusdottir S, Fillmore P, den Ouden DB, Hjaltason H, Rorden C, Kjartansson O, Bonilha L, Fridriksson J (October 2013). "Damage to left anterior temporal cortex predicts impairment of complex syntactic processing: a lesion-symptom mapping study". Human Brain Mapping. 34 (10): 2715–23. doi:10.1002/hbm.22096. PMID 22522937.
  108. ^ Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP (March 2015). "Neurobiological roots of language in primate audition: common computational properties". Trends in Cognitive Sciences. 19 (3): 142–50. doi:10.1016/j.tics.2014.12.008. PMC 4348204. PMID 25600585.
  109. ^ Hickok G, Okada K, Barr W, Pa J, Rogalsky C, Donnelly K, Barde L, Grant A (December 2008). "Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures". Brain and Language. 107 (3): 179–84. doi:10.1016/j.bandl.2008.09.006. PMID 18976806.
  110. ^ Zaidel E (September 1976). "Auditory Vocabulary of the Right Hemisphere Following Brain Bisection or Hemidecortication". Cortex. 12 (3): 191–211. doi:10.1016/s0010-9452(76)80001-9. ISSN 0010-9452.
  111. ^ Poeppel D (October 2001). "Pure word deafness and the bilateral processing of the speech code". Cognitive Science. 25 (5): 679–693. doi:10.1016/s0364-0213(01)00050-7.
  112. ^ Ulrich G (May 1978). "Interhemispheric functional relationships in auditory agnosia. An analysis of the preconditions and a conceptual model". Brain and Language. 5 (3): 286–300. doi:10.1016/0093-934x(78)90027-5. PMID 656899.
  113. ^ Benson DA, Hienz RD, Goldstein MH (August 1981). "Single-unit activity in the auditory cortex of monkeys actively localizing sound sources: spatial tuning and behavioral dependency". Brain Research. 219 (2): 249–67. doi:10.1016/0006-8993(81)90290-0. PMID 7260632.
  114. ^ Miller LM, Recanzone GH (April 2009). "Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity". Proceedings of the National Academy of Sciences of the United States of America. 106 (14): 5931–5. doi:10.1073/pnas.0901023106. PMID 19321750.
  115. ^ a b c Tian B, Reser D, Durham A, Kustov A, Rauschecker JP (April 2001). "Functional specialization in rhesus monkey auditory cortex". Science. 292 (5515): 290–3. doi:10.1126/science.1058911. PMID 11303104.
  116. ^ Woods TM, Lopez SE, Long JH, Rahman JE, Recanzone GH (December 2006). "Effects of stimulus azimuth and intensity on the single-neuron activity in the auditory cortex of the alert macaque monkey". Journal of Neurophysiology. 96 (6): 3323–37. doi:10.1152/jn.00392.2006. PMID 16943318.
  117. ^ Clarke S, Bellmann A, Meuli RA, Assal G, Steck AJ (June 2000). "Auditory agnosia and auditory spatial deficits following left hemispheric lesions: evidence for distinct processing pathways". Neuropsychologia. 38 (6): 797–807. doi:10.1016/s0028-3932(99)00141-4. PMID 10689055.