User:Cognition language and thought

From Wikipedia, the free encyclopedia

Lecture 1


Language seems effortless for unimpaired adult L1 speakers, and much of our knowledge is implicit.

  • Need to study breakdown, limits (demanding, speeded tasks), & errors to understand cognitive processes & representations involved in language use
  • Observe & measure natural language behaviour, use experimental tasks to control for factors not under investigation, study impairment, L2 abilities, development

Areas of linguistics


Language is complicated. There’s a lot happening at once.

  • Multiple levels of language processing and representation

Lecture 2


Age and abilities

  • 0;4 – cries, gurgles, smiles
  • 0;7 – da-da-da
  • 2;3 – Big drum.
  • 2;6 – What that egg doing?
  • 2;10 – I simply don’t want put in chair.
  • 3;2 – Can I keep the screwdriver just like the carpenter keep a screwdriver?
  • 4;0 - reciting Shakespeare

Understanding language acquisition - broader implications

  • Central role in debate over how the mind works
    • Degree of uniqueness of human cognition
    • Interaction between innate and environmental influences
    • Domain-specific modules (a separate LAD) or general cognitive processes?
    • To what extent
      • do we make communicative use of a system that already has certain properties? versus
      • does the system acquire these properties through communicative use?

Why is language acquisition complicated?


1. multiple levels of language processing and representation

    • See: Areas of linguistics in lecture 1.

Understanding a sentence

  • Perceive & identify speech sounds
  • Locate the word boundaries
  • Recognise the words
  • Access the meanings of the words
  • Integrate the meanings of words into a whole
  • Evaluate the whole meaning in the context of the situation and the conversation

Producing a sentence

  • Form a concept you want to communicate
  • Decide how to express it for the particular person you’re speaking to at that particular point in the conversation
  • Retrieve the words that will express this concept in this way
  • Order them according to syntactic rules
  • Compute the motor commands for producing these words
  • Monitor whether you’re making any speech errors

Info on Levelt's Language Production Model: [1]
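As a rough illustration only, the production steps above can be caricatured as a staged pipeline. Everything here is an invented placeholder (the LEXICON, the stage functions, the "GREETING" concept); real production stages are complex cognitive processes, not string operations.

```python
# Caricature of the sentence-production steps as a staged pipeline.
# All names and values are invented placeholders for illustration.

LEXICON = {"GREETING": ["hello", "there"]}  # hypothetical toy lexicon

def conceptualise(intent):
    return intent                            # form a concept to communicate

def select_words(concept):
    return LEXICON[concept]                  # retrieve words expressing the concept

def order_syntactically(words):
    return list(words)                       # order words (trivial in this toy)

def articulate(words):
    return " ".join(words)                   # stand-in for computing motor commands

def produce(intent):
    utterance = articulate(order_syntactically(select_words(conceptualise(intent))))
    # monitoring step: check for (one crude kind of) speech error before output
    assert utterance, "monitor caught an empty utterance"
    return utterance

print(produce("GREETING"))  # hello there
```

The point of the sketch is only that each step feeds the next, matching the ordered list above.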

2. Language is Symbolic

  • words stand for things, including abstract concepts as well as concrete objects
  • Problem: How do children learn which object or concept a word stands for when the same object or concept can have different labels

Words revisited


Words stand for things, and we have to agree on the form of the word (written &/or spoken) AND the range of things it stands for, or we can’t communicate precisely with them.

Example: Helen Keller

  • Nearly died of fever (probably meningitis or scarlet fever) aged 1;7, which left her deaf and blind
  • Old enough to remember experiences when learning language
  • Learned language by finger spelling from 6;10, a pioneering technique at the time
  • Sparks critical period debate

Productivity/generativity of language: the ability to make infinite combinations (phrases, sentences) from a finite repertoire of individual units (words and sounds).

Language vs Communication


A natural language:

  • Has a grammar (structural regularities, “rules”)
  • Productive (infinite combinations possible within these rules)
  • Arbitrary (no necessary relationship between the form of a language unit and what it refers to)
  • Discrete (composed of parts that are arranged hierarchically e.g. Weiten Fig 8.1)

Communication system:

  • No grammar, not productive
  • Is symbolic: relies on symbols with a meaning that is known by both communicator and recipient

The learnability problem


Children get lots of positive evidence: evidence about which sentences are possible.

  • Despite all the positive evidence, they still produce sentences they have never heard, as well as ungrammatical sentences.

They get little negative evidence: information about which strings of words are not grammatical sentences.

  • Parents rarely try to correct kids' grammar, and when they do, it is futile.
  • Children's utterances are reinforced by parents based on meaning, not syntax.

Brown & Hanlon (1970)


Investigated database of 3 kids' language at 2.5 years and 4.5 years

  • No correlation between grammaticality & parents'/grandparents' approval or disapproval
  • No correlation between grammaticality of questions and whether parents understood or misunderstood
  • Parents respond to correctness of content of utterance, not correctness of syntactic structure

Some parents repeat ungrammatical sentences more often (many studies, cited in Bloom, 1994), BUT

  • Only some parents
  • Only for younger children
  • Some cultures do not have extensive parent-child interaction with opportunity for adult feedback
  • In the end it all seems to have no bearing on how well children learn language

In order to give an adequate account of how language is learned, theories of language acquisition need to account for:

  • The speed with which children acquire language
  • The sophistication of children's language abilities
  • The type of input they receive

Theoretical approaches

  • Environmental learning theories
  • Nativist theories
  • Cognitive Interactionist theories
  • Functionalist theories

Environmental learning theories

  • Early: Skinnerian operant learning approach stresses role of imitation & reward
  • Argument demolished by Chomsky
  • Currently: influenced by Bandura’s social cognitive model and concept of observational learning. Counter nativist arguments with evidence that adults adapt language to facilitate child’s learning, e.g. infant-directed speech

Nativist theories


e.g. Chomsky, Pinker

  • Language must be an innate predisposition
  • Language development reflects maturation (knowledge of language like an “organ” that develops within the brain - this organ is called a “language acquisition device” or LAD)
  • A response to the “learnability problem”:
    • no systematic exposure to incorrect as well as correct forms to extract rules
    • no explicit instruction about meanings or rules
    • no systematic negative feedback when they produce incorrect language
  • “If the world is not telling children to stop, something in their brains is” (Pinker, 1990, p. 205)
  • Innate brain mechanisms for learning language distinct from those underlying other cognitive tasks; debate about what information provided innately by that organ

Cognitive Interactionist Theories

  • Language acquisition reflects general cognitive capabilities that also contribute to other skills
  • Language learning supported by social context; children observe and engage in social exchanges that build a functional communication system
  • Children do not learn abstract grammatical rules; they learn concepts and functions that occur in the world (eg agent, action, object etc) and then learn how to describe these in language
  • Emphasise that children use their early knowledge of the world to “crack the code of speech”

Functionalist theories

  • combination of cognitive and environmental
  • Bruner: emphasises that children learn TO USE language through interactions with the world and other people
  • depends on 2 forces: the child's innate capacities (LAD) and the social support provided by others (Bruner's Language Acquisition Support System, LASS)

Lecture 3


Investigating infant language

  • Observational data – Diary studies (over-represent children of linguists & psycholinguists!!)
  • Experimental data - Babies get interested in things, bored with things, and remember things: allows
    • novelty/habituation procedures: Infants become habituated to familiar stimulus. With change of stimulus, dishabituation shows responsive to change
    • contingent reinforcement procedures: Teach child to make response to receive stimulus (suck, look, kick). Measure novelty for child in intensity/duration of response

High Amplitude Sucking (HAS) procedure

  • Blind (non-nutritive) nipple
  • Determine baseline sucking rate
  • Sound stimulus presented contingent on sufficiently high sucking rate
  • Baby sucks hard if interested to hear stimulus; sucks more slowly as bored (habituation)
  • Once habituation occurs, new stimulus presented: HAS at habituation rate = no discrimination; increase HAS = discrimination
  • Difference in sucking rate indexes sensitivity to differences in stimulus
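The decision logic of the HAS procedure can be sketched as a small simulation. The sucking rates, the 0.8 habituation criterion, and the 1.2 dishabituation threshold below are invented for illustration; they are not values from the literature.

```python
# Toy simulation of the HAS decision logic. All numeric thresholds and
# rates are invented for illustration.

def habituated(rates, baseline, criterion=0.8):
    """Habituation: mean rate over the last 3 windows falls below criterion * baseline."""
    return sum(rates[-3:]) / 3 < criterion * baseline

def discriminates(pre_change_rate, post_change_rate, threshold=1.2):
    """Dishabituation: sucking rate recovers substantially after the stimulus change."""
    return post_change_rate > threshold * pre_change_rate

baseline = 50.0                       # sucks per minute at baseline
rates = [48, 45, 38, 33, 30]          # declining rate while the same stimulus repeats
print(habituated(rates, baseline))    # True: habituation criterion reached

print(discriminates(30, 32))          # False: no recovery -> no discrimination
print(discriminates(30, 44))          # True: rate recovers -> discrimination
```

This also makes the interpretation problem in the next section concrete: a flat post-change rate could mean "no discrimination" or simply a bored or sleepy baby.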

Limitations

  • Can be hard to interpret no change - baby uninterested in sound? Uninterested in procedure? Falling asleep?
  • Babies not very interested in sucking blind nipples after 4 months

Kicking mobiles


Kick more if stimulus is known/remembered; kick more slowly or stop kicking if new

Eye gaze


Longer looking if stimulus is new; shorter gaze times once habituated

Head-turn techniques


The baby can turn its head to look at either of two monitors. Which one the baby looks at tells you which stimulus s/he comprehends.

Or the baby learns that a change in a language-related stimulus is paired with another interesting stimulus (e.g. a dancing monkey). The baby perceives the change if s/he turns to look for the monkey.

Phonological knowledge

  • Ability “to distinguish and produce the sound patterns of the adult language” (Vihman, 1988, p. 61)
  • babies must be able to perceive the sounds language is constructed from

Indications of auditory processing in utero

  • Fetal heart rate tests (rises to mother's voice, falls to a stranger's)
  • Newborns < 24 hours old prefer their mother's voice to another female's (DeCasper & Fifer, 1980) (HAS)
  • DeCasper & Spence (1986) (HAS):
    • 2 groups of pregnant women, 1 group reads passage aloud every day for last 6 weeks of pregnancy
    • Passages read by mums and other women. Babies of reading group preferred this passage to another passage, regardless of reader. Babies of non-reading group showed no preference for one passage.
  • Babies born to French mothers showed a higher HAS rate when listening to French than to Russian
  • Babies born to Russian mothers showed the opposite pattern
  • Babies born to mothers in other language communities showed no preference between French & Russian
  • French infants showed no preference for English vs Italian
  • Speech filtered to leave only prosodic information showed the same results. Does the womb filter out speaker-specific and phonemic information to leave prosody?
  • The result disappeared when speech was played backwards

There is linguistic prosody (e.g. raised pitch to indicate asking a question) and emotional prosody (e.g. broken voice, sad voice quality when someone is sad).

Phonetic information


As opposed to phonological information.

The International Phonetic Alphabet, baby! Captures things like place of articulation, manner of articulation, voicing.

  • Phonetic difficulty influences order of acquisition (e.g. [b] at <2 years, versus [th] in "the" at 5+ years)

Phonemic information distinguishes phonetic differences that are meaning-relevant in a particular language.

  • some very marked phonetic differences are not phonemic

eg Oz/US/NZ dialects

  • some phonetic distinctions are phonemic in some languages but not others eg r/l, ph/p, tone, mb/b
  • languages vary in number of phonemes eg English=45; Polynesian=11; Khoisan=141
  • alphabetic orthographies make phonemes (relatively) explicit BUT speech stream doesn’t
  • Tone is phonemic in some languages (e.g. Chinese languages, Thai)
  • Unlike visual perception, listening to speech we hear distinct categories even when the changes are gradient (especially for consonants) !!

Speech spectrogram: shows loudness of all frequencies over time. Formants are the frequencies amplified by the shape of the mouth, so appear as dark tracks.

Formants: prominent resonances in voiced sound (voiced sound produced by vibrating the vocal folds)

Formant Transition: Slope of formant at word onset. Indicates place of articulation: b vs. d vs. g.

Speech waveform: shows changing loudness over time

Categorical perception in adults

  • adults do not perceive continuous variation
  • they perceive the sound as one phoneme up to a particular point on the continuum, then perceive the other

Categorical perception of ba vs pa

  • the phonemes /b/ and /p/ differ only in "voice onset time" (VOT = the time at which voicing starts at the larynx relative to the release of the mouth closure)
  • Prevoicing ([mb], VOT = -70 msec) is phonemic in Spanish but not English. English listeners hear [b] and [mb] as /b/ (the phonetic difference is not phonemic)
  • Spanish listeners hear [mb] and [b] as two different phonemes (phonetic difference is phonemic)

In English

  • /b/ = Short VOT (eg 20 msec)
  • /p/ = Longer VOT (eg 80 msec)
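The sharp /b/-/p/ split can be sketched as a toy classifier over the VOT continuum. The 30 ms boundary is an assumed illustrative value lying between the typical VOTs given above (/b/ ~ 20 msec, /p/ ~ 80 msec), not a measured one.

```python
# Toy model of categorical perception on the English /b/-/p/ VOT
# continuum. The 30 ms boundary is an assumed illustrative value.

BOUNDARY_MS = 30  # assumed English /b/-/p/ category boundary

def perceive(vot_ms):
    """Listeners report a discrete category, not the continuous VOT value."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

# Equal physical steps, unequal perceptual effect:
print(perceive(0), perceive(20))   # /b/ /b/  (20 ms step within a category: no change heard)
print(perceive(20), perceive(40))  # /b/ /p/  (same 20 ms step across the boundary: change heard)
```

The two print lines mirror the Eimas et al. design in the next section: a within-category step versus an equal-sized across-category step.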

Do infants show this categorical perception?

  1. Eimas, Siqueland, Jusczyk & Vigorito (1971) used the High Amplitude Sucking procedure with a /b/ (typical VOT = 20 msec).
  2. The infants (1 and 4 months) habituate.
  3. The stimulus is changed to a /b/ with a 0 msec VOT.
  4. Habituation continues: the infant isn't interested in a 20 msec change within the category.
  5. The stimulus is changed to a /p/ with VOT 40 msec (the same size of change: 20 msec).
  6. The infant dishabituates: very interested in a 20 msec change across the category boundary.
  7. Eimas et al. (1971) results: infants discriminated between-category but not within-category ba/pa contrasts at the same VOTs as adults.

Babies show sensitivity to other consonant and vowel contrasts at 2-4 months in their first language

Categorical perception of non-native contrasts in infants


Werker & Tees (1984)

  • /ba/ vs /da/ (English)
  • Dental and retroflex stops (Hindi)
  • [k'i]-[q'i] (Nthlakapmx clicks)

Babies of all nationalities at 6-8 months can discriminate all these contrasts.

Categorical perception in infants

  • young infants can discriminate phonetic contrasts from both their own language and languages they have not been exposed to
  • they are "universal phoneticians": they can discriminate essentially all the sound contrasts that languages make use of (Hoff-Ginsberg, 1997, p. 50)
  • consistent with an innate ability to make language-relevant distinctions, e.g. between /b/ and /p/ but not between two different /b/ tokens (nativist)

  • Question: Is this ability species-specific?
    • NO: chinchillas (whose auditory system is similar to ours) show a very similar sensitivity to the voicing distinction (Kuhl & Miller, 1978); rhesus monkeys can do some categorical perception tasks the way humans do
  • Question: Is babies' sensitivity speech-specific?
    • NO: non-speech stimuli with similar onset distinctions show categorical perception in both adults and children (Jusczyk et al., 1980)

  • Conclusion: not an innate "phoneme categorisation system" (other species that don't use phonemes can do it), but a perceptual system with certain regions of sensitivity and insensitivity, e.g. tuned to detect differences in VOT at 20-40 ms but not 0-20 ms

Influence of ambient language

Phonetic categorisation is strongly environmentally influenced:

  • by 10-12 months infants can only discriminate phonemes in their native language (Werker & Tees, 1984), i.e. infants acquire the phoneme system of their parents (ambient language)
  • experience with the target language reduces the ability to perceive unused contrasts (they stop being universal phoneticians)
  • adult perception of speech sounds in other languages depends on overlap with the first-language perception system (Best, 1994)

How do infants learn to discriminate phonemes?

  • Auditory theories: speech perception uses the same auditory information used for non-speech
    • Question: are infants only sensitive to auditory differences?
    • NO: they categorise different speaker tokens as the same phoneme
    • e.g. at 2 months they discriminate bug/dug even when all training instances are from different speakers (Jusczyk et al., 1992)
    • instance-specific auditory information is being mapped to a more general phonemic representation

  • Motor theory of speech perception: a specialised speech perception system based on production; a species-specific sensitivity to the articulatory gestures of speech, retrieved from the auditory signal
    • predicts that speech perception should be influenced by visual as well as auditory information (visual information also provides information about articulator movement)
    • the McGurk effect shows that both adults' and infants' perception integrates auditory and visual information (pracs)

  • Statistical learning of categories
    • Hypothesis: infants keep a record of the number of tokens at each value, and learn the distributions as categories (Miller, 1998; McMurray & Aslin, 2005)
    • infant-directed speech may assist with this

McMurray's diagram shows how a bimodal distribution is categorised into separate phonemes: auditory tuning plus the tokens available in ambient speech allows learning of categories.
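The distributional-learning idea can be sketched as a toy two-means clustering over VOT tokens. The token values and the clustering method below are illustrative assumptions, not the mechanism proposed by McMurray & Aslin.

```python
# Toy distributional learning: recover two VOT categories from a bimodal
# set of tokens with a simple 1-D two-means clustering. All values are
# invented for illustration.

def two_means(tokens, iterations=20):
    """Cluster 1-D values around two centres (a stand-in for learning
    two distributions as two phoneme categories)."""
    lo, hi = min(tokens), max(tokens)
    for _ in range(iterations):
        a = [t for t in tokens if abs(t - lo) <= abs(t - hi)]
        b = [t for t in tokens if abs(t - lo) > abs(t - hi)]
        if not a or not b:                 # degenerate split: stop early
            break
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi                          # learned category centres

# Tokens clustered near 15 ms (/b/-like) and 75 ms (/p/-like):
tokens = [10, 12, 15, 18, 20, 14, 70, 72, 75, 78, 80, 74]
b_centre, p_centre = two_means(tokens)
print(round(b_centre), round(p_centre))  # 15 75
```

The point matches the diagram: nothing labels the tokens /b/ or /p/; the two modes of the distribution alone are enough to pull out two categories.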

Actual VOT distributions from one speaker: unusually long because he has an articulatory disorder (Croot et al., 1999).