
Wikipedia:United States Education Program/Courses/Psychology of Language (Kyle Chambers)/Summaries


Please add your 500-word summaries in the appropriate section below. Include the citation information for the article. Each student should summarize a different article, so once you have chosen an article, I recommend adding the citation with your name (type four tildes). That way others will not choose the same article as you. You can then come back later and add your summary.

Speech Perception

______________________________________________

The Development of Phonemic Categorization in Children Aged 6-12 by Valerie Hazan and Sarah Barrett Lkientzle (talk) 15:38, 29 February 2012 (UTC)

In 2000, Hazan and Barrett sought evidence for the development of phonemic categorization in children aged 6 to 12 and compared it to adult performance. They wanted to test whether categorization is more consistent with dynamic or static cues, as well as how this changes when several cues are available versus only limited cues to signal the phonemic differences. For example, how well can a child distinguish /d/-/g/ and /s/-/z/ depending on the cues given, and how does this compare to how an adult does this task? This study was important because previous research had yielded contradictory results for the age at which children's perception of phonemic categorization reaches an adult level, and the criteria and methods for testing this were inconsistent. It is also important because it provides evidence that phoneme boundary sharpening is still developing well after the age of 12 into adulthood.

Previous research has repeatedly shown a developmental trend: as children grow older, they categorize phonemes into their respective categories more consistently. The age at which phonemic categorization becomes adult-like is still debated, though. Some studies have not found significant differences between 7-year-olds and adults in their ability to categorize (Sussman & Carney, 1989). Other studies have found the opposite result (significant differences between age groups) with virtually the same criteria (Flege & Eefting, 1986). The present study by Hazan and Barrett sought to re-evaluate these previous findings in a tightly controlled manner and to see whether 12-year-olds (the oldest of their participant pool, next to their adult control group) were performing at the level of adults, which would signify the end of this developmental growth.

The test was run with 84 child subjects, aged 6-12, and with 13 adult subjects who served as a control group. Each subject was run separately and completed a two-alternative forced-choice identification procedure that contained different synthesized phoneme sounds. These sounds were presented on a continuum running from one phoneme (/d/) to another (/g/). When a participant had correctly identified a phoneme on at least 75% of presentations, the next sound on the continuum was presented. This outline was adapted for four different test conditions that each tested a different phoneme continuum (such as /s/-/z/) and presented the sounds with either a "single cue" or "combined cues".
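The adaptive presentation described above can be pictured as stepping through the synthesized continuum and advancing only while identification stays at or above the 75% criterion. The following Python sketch is only an illustration of that logic; the step indices, trial count, and toy listener are hypothetical and are not taken from the paper.

```python
import random

def run_continuum(steps, identify, trials_per_step=20, criterion=0.75):
    """Step through a synthesized /d/-/g/ continuum, moving to the next step
    only while the listener identifies the intended phoneme on at least
    `criterion` of the trials (trial count and step indices are assumed)."""
    results = {}
    for step in steps:
        correct = sum(identify(step) for _ in range(trials_per_step))
        proportion = correct / trials_per_step
        results[step] = proportion
        if proportion < criterion:
            break  # listener no longer categorizes this step reliably
    return results

# Toy listener whose accuracy declines toward the category boundary.
def toy_listener(step):
    p_correct = max(0.5, 1.0 - 0.05 * step)
    return random.random() < p_correct

print(run_continuum(steps=range(10), identify=toy_listener))
```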

The dependent variables of this study were the categories participants chose for the sounds they heard. The independent variables were the different conditions: which phoneme continuum was used and whether it was a single-cue or combined-cue presentation. The combined-cue condition differed from a typical presentation of the sounds in that the contrasting cues were varied in harmony.

This study found that, as hypothesized, children continue to develop their ability to categorize phonemes as they age, and this development continues even after the child turns 12. The researchers also controlled for extraneous variables such as attention deficits in the children, language barriers, and hearing deficits. Previous research on young children has shown that humans are proficient at identifying categories by the age of three, but the present study indicates that this ability continues to grow with age and becomes more competent in ambiguous-cue situations. The study therefore states that there is no reason to presume a child is as competent as an adult at making these distinctions by the age of 12, as some previous research had suggested.

This research is important because it indicates that although we seem to be born with an innate sense of how to process phonemes, and by an early age are quite good at it, we should not assume that a person's environment does not aid in the development of even more advanced perceptual capabilities. It seems that we can "practice" this distinction and get better at it by being exposed to more instances that force us to figure out how to categorize sounds in order to make sense of them.

___________________________________________________________________________

The Role of Audition in Infant Babbling by D. Kimbrough Oller and Rebecca E. Eilers Amf14 (talk) 16:37, 21 February 2012 (UTC)

A number of questions have been raised about the importance of experience in learning to talk. It is possible that infants are born with built-in speech capabilities, but it is also possible that auditory experience is necessary for learning to talk. Oller and Eilers proposed that if deaf infants babble in the same typical patterns as hearing infants, this would be evidence that humans are born with an innate ability to speak. To test this proposal, they needed to study what types of speech emerge at each stage of the first year of an infant's life. By the canonical stage (7-10 months), infants generally utter sounds characterized by repetitions of certain sequences such as "dadada" or "baba". Research has shown that deaf infants reach this stage later in life than hearing infants.

Studying deaf infants has been challenging in the past because it is uncommon to diagnose hearing disabilities within the first year of a child's life. It is also difficult to find deaf infants with no other impairments who have had severely impaired hearing since birth and have been diagnosed within the first year of their lives.

In this experiment, 30 infants were analyzed, 9 of them severely or profoundly hearing impaired. Each infant was measured to determine at what age they reached the canonical stage. The two groups were designated based on whether the infants were deaf or not. In both groups, the infants' babbling sequences were tape recorded in a quiet room with only the parent and the experimenter present. The number of babbling sequences was counted by trained listeners for each infant. The listeners based their counting on four main criteria, including whether the infant used an identifiable vowel and consonant, the duration of the syllable, and the use of a normal pitch range. Vegetative and involuntary sounds such as coughs and growls were not counted. The infants were prompted by their parents to vocalize while in the room. If they did not comply, or if their behavior was considered abnormal compared to their actions at home, the experiment was rescheduled.
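As a rough illustration of how such listener judgments might be tallied, the sketch below counts an utterance as canonical only if it meets all of the criteria listed above; the field names and numeric thresholds are hypothetical and are not values from the study.

```python
def is_canonical(utterance):
    """Hypothetical check against the criteria described above: an
    identifiable consonant and vowel, a syllable of normal duration,
    and pitch within a normal range (bounds are assumed)."""
    return (utterance["has_consonant"]
            and utterance["has_vowel"]
            and 0.1 <= utterance["syllable_duration_s"] <= 0.5   # assumed bounds
            and 75 <= utterance["pitch_hz"] <= 600)              # assumed bounds

def count_canonical(utterances):
    # Vegetative sounds (coughs, growls) are excluded before counting.
    return sum(is_canonical(u) for u in utterances if not u.get("vegetative", False))

sample = [
    {"has_consonant": True, "has_vowel": True, "syllable_duration_s": 0.25, "pitch_hz": 300},
    {"has_consonant": False, "has_vowel": True, "syllable_duration_s": 0.30, "pitch_hz": 280},
]
print(count_canonical(sample))  # -> 1
```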

Results showed that normal-hearing infants reached the canonical stage of speech by 7-10 months. Deaf infants, on the other hand, did not reach this stage until after 10 months. When both groups were analyzed at the same age, none of the deaf infants produced babbling sounds that qualified as canonical. The hearing subjects produced approximately 59 canonical utterances per infant, compared with approximately 50 utterances for the deaf subjects, who produced them 5-6 months later than the hearing subjects did.

Overall, hearing-impaired infants show significant delays in reaching the canonical stage of language development. Oller and Eilers attributed this to their inability to hear speech. There is evidence that hearing aids can help infants reach the babbling stages earlier, while completely deaf infants may never reach the canonical stage. Within the experiment, both groups of babies showed similar patterns of growls, squeals, and whispers at the precanonical stage, but once the infants reached an age where language should develop further, audition and modeling played a far more important role, leaving deaf children significantly behind in speech development.

Oller, D. K., & Eilers, R. E. (1988). The role of audition in infant babbling. Child Development, 59(2), 441-449. doi:10.2307/113023

Amf14 (talk) 17:21, 28 February 2012 (UTC)

The Impact of Developmental Speech and Language Impairments on the Acquisition of Literacy Skills by Melanie Schuele

Previous studies have wrestled with the task of identifying speech/language impairments in children and determining how they can be remedied. Language impairments are often precursors to lifelong communication difficulties as well as academic struggles. Hence, researchers past and present are focused on understanding speech/language impairments and finding solutions for children and adults alike. Schuele (2004) provides a review of previous studies that focus on differentiating and evaluating developmental speech impairments.

Individuals struggling with speech/language impairments are often referred to as language delayed, language disordered, language impaired, and/or language disabled. The review article, however, defines and builds on three key types: speech production impairments, oral language impairments, and combined speech production and oral language impairments. Furthermore, a distinction is made between two developmental speech impairments: articulation disorders and phonological disorders. Articulation disorders have a motoric basis that results in difficulty pronouncing particular speech sounds. For example, a child may substitute /w/ for /r/, so that "rabbit" sounds like "wabbit." Phonological disorder (PD) is a cognitive-linguistic disorder that results in difficulty with multiple speech sounds and is detrimental to overall speech intelligibility.

Researchers distinguish between children with PD alone and children with PD plus language impairment (PD + Language), who are considered disabled based on their cognitive-linguistic abilities. In one study testing for reading disabilities, only 4% of the PD group showed a disability in word reading and 4% in comprehension. In contrast, within the PD + Language group, 46% were classified as disordered in word reading and 25% as disordered in reading comprehension.

A second study focused specifically on the differences between PD alone and PD + Language. Children between the ages of 4 and 6 were assessed and then evaluated again upon their entry into third and fourth grade. Assessments revealed that PD + Language children had more severe speech deficits, lower language scores, fewer cognitive-linguistic resources, and a family history of speech/language/learning disabilities compared to PD-alone children.

These studies highlight the importance of understanding and addressing speech/language difficulties in children. Children who struggle with a language condition, especially PD + Language, are at very high risk for language impairment throughout childhood, adolescence, and potentially adulthood. Although this article did not focus on treatment, future research obstacles were outlined. The challenge of testing preschoolers and early school-aged children for language impairments stems from a lack of reliable and valid materials that can measure reading abilities and phonological awareness. In addition, children with language impairments spend more time trying to learn the basics of communication while their peers blaze ahead. The lack of cognitive-linguistic resources to devote to other tasks needs to be addressed when evaluating the efficacy of treatments.

Schuele, M. C. (2004). The impact of developmental speech and language impairments on the acquisition of literacy skills. Mental Retardation and Developmental Disabilities, 10, 176-183. Katelyn Warburton (talk) 20:52, 28 February 2012 (UTC)


Longitudinal Infant Speech Perception in Young Cochlear Implant Users

Much research has been done on speech perception in infants with normal hearing, especially regarding phoneme discrimination in the first year of life. It has been shown that infants with normal hearing have surprisingly sophisticated speech-perception abilities from the onset of life, and this ability plays a critical part in language development. Building on this fundamental picture of speech-perception development, Kristin Uhler and her colleagues set out to determine what the course of development would be for a child facing developmental challenges.

The present study is a case study exploring the development of speech perception in infants with hearing impairments who have received cochlear implants to aid their linguistic development. Specifically, the study aims to explore how speech perception develops in children with new cochlear implants, whether they are able to discriminate speech patterns, and how their development compares to that of a child with normal hearing. This research is of great importance because if children with cochlear implants can perceive speech in the same way as normal-hearing children, they will be able to interact in a speaking world.

This study focused on case studies of seven children with normal hearing and three children with cochlear implants. Each child underwent speech perception testing in which they were asked to discriminate between two contrasting sounds. The number of sounds played as well as their difficulty was manipulated by the experimenters, who ultimately measured the number of head turns the child made in response to the sounds played in the room. At the onset of the experiment, each child was placed in their caretaker's lap. After hearing several initial, simple sounds, the children lost interest in the source of the sound. The child was then played slightly differing sounds and was conditioned to turn their head when they heard a difference. All testing took place in a double-walled sound booth.

The results of these case studies revealed a great deal about the speech-perception abilities of children with cochlear implants. The first case study showed that prior to receiving a cochlear implant, the child did not perceive any sounds occurring in his environment. Once the cochlear implant was activated, however, he was able to develop speech perception, with head-turn accuracy slightly below that of a child with normal hearing. In the second case study, the child with the cochlear implant had even more promising success. After the implantation, he was able to discriminate many of the five core phoneme contrasts that each normal-hearing control child could discriminate. This child's speech perception was almost normalized with the use of the cochlear implant, except for the /pa/-/ka/ distinction. The final case study showed complete normalization of speech perception with the use of a cochlear implant. The study also suggested that in children with cochlear implants and with normal hearing alike, sensitivity to vowels and voice onset time emerges in development before the ability to discriminate place of articulation. These findings supported the researchers' predictions.

Broader implications of this research include its application to linguistic development and phoneme discrimination in early infancy. These findings suggest that children with hearing impairments may be able to participate in this crucial development.

Uhler, K., Yoshinaga-Itano, C., Gabbard, S., Rothpletz, A. M., & Jenkins, H. (2011). Longitudinal infant speech perception in young cochlear implant users. Journal of the American Academy of Audiology, 22(3), 129-142. doi:10.3766/jaaa.22.3.2 Kfinsand (talk) 02:13, 29 February 2012 (UTC)


The Role of Talker-Specific Information in Word Segmentation by Infants by Derek M. Houston and Peter W. Jusczyk

When infants are introduced to human speech, they typically hear the majority of words in the form of sentences and paragraphs rather than as single words. In fact, previous research found that only 7% of the speech heard by infants consists of isolated words. Although research shows that infants can identify single words produced by different speakers, Houston and Jusczyk aimed to find out whether infants could identify the same words across speakers in the context of fluent speech.

For the initial study, 36 English-learning 7.5-month-olds were presented with single words and full passages. The passages consisted of six sentences for each of four specific words (cup, dog, feet, bike). The participants were split into two groups: one heard Female Talker 1 first followed by Female Talker 2, and the other group heard the same speakers in the opposite order. Throughout the procedure, the infants' head-turn preference was measured.

The infants turned their heads for longer toward the familiar words presented by the second speaker, suggesting that they could generalize these learned words across different speakers.

The second experiment also tested generalization of words across talkers; however, the second speaker was replaced by a speaker of the opposite sex. In contrast with the initial study, the results showed no difference in head-turn preference between the two speakers, indicating difficulty generalizing the words across speakers of different genders. Experiment three aimed to mirror the methods and findings of the initial study by using two male speakers instead of two females. The results showed that infants were able to generalize across male speakers, just as they had across female speakers. In the fourth experiment, Houston and Jusczyk addressed the possibility that 10.5-month-olds might be able to generalize across speakers of different genders. By replicating the second experiment with 10.5-month-old infants, they found that these older infants were able to generalize between speakers of different genders.

This study and the follow-up experiments suggest that infants are able to generalize words in fluent speech between speakers, but only to a certain extent. While 7.5-month-olds are able to generalize between two women and between two men, they are not able to generalize fluent speech across genders. By the age of 10.5 months, however, the infants' ability to generalize has increased and they are able to generalize between speakers of different genders.

Houston, D. M., & Jusczyk, P. W. (2000). The role of talker-specific information in word segmentation by infants. Journal of Experimental Psychology: Human Perception and Performance, 26(5), 1570-1582. doi:10.1037/0096-1523.26.5.1570 Smassaro24 (talk) 06:53, 29 February 2012 (UTC)


Positional Effects in the Lexical Retuning of Speech Perception by Alexandra Jesse & James McQueen Lino08 (talk) 15:01, 23 February 2012 (UTC)

In the melting pot that is American culture, people speak many languages, accents, and dialects. It can be a challenge for listeners to understand one another because pronunciation varies across speakers. Previous research has found that listeners use numerous sources of information to interpret the signal. People must also use their previous knowledge of how words should sound to help them adjust to differences in others' pronunciations. These ideas led the researchers to postulate that the speech-perception system benefits from all learning experiences: once word-specific knowledge is gained, it becomes possible to understand a different talker's pronunciations from any position within a word. The researchers followed up on this idea and tested whether lexical knowledge from previously learned words still allows words to be understood when the relevant sounds appear in a different position within the word.

In Experiment 1 of this study, the researchers created lists of 20 /f/-initial and 20 /s/-initial Dutch target words based on the results of a pretest. They combined these 40 words with 60 filler words and 100 phonetically legal non-words. Ninety-eight Dutch university students with no hearing problems participated and were randomly assigned to one of two groups. The /f/-training group was presented with 20 natural /s/-initial words and 20 ambiguous /f/-initial words, and vice versa. Both groups heard all 160 filler items. Participants had to respond quickly and accurately whether the item they heard was a Dutch word or not. After this exposure phase, participants went through a test phase in which they listened to /f/ and /s/ fricatives as either onsets or codas of words and had to categorize, as quickly and accurately as possible, whether the sound they heard was an /f/ or an /s/. The independent variable was whether participants were trained with ambiguous /f/ words or ambiguous /s/ words, and the dependent variable was the reaction time in the test phase. In Experiment 2, word-final sounds were swapped with syllable-initial sounds to test for a possible transfer of learning. The researchers kept the procedure the same in both experiments.

The results from the first experiment failed to show lexical retuning, meaning that the researchers were not able to determine whether learning transfers across syllables in different positions. The results from the second experiment showed that more [f] responses were given by the /f/-training groups than by the /s/-training groups, which demonstrates lexical retuning and its transfer across different syllable positions. The findings related to the researchers' expectations in mixed ways. In contrast to their hypothesis, the researchers found no evidence that lexical retuning occurs when ambiguous speech sounds are heard in word-initial position. However, their findings did show that when sounds in different positions are matched acoustically, a person can generalize over the difference in position. The researchers concluded that retuning helps listeners recognize and understand the words of a speaker even if they have an unusual pronunciation.

Jesse, A., & McQueen, J. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review. doi:10.3758/s13423-011-0129-2


Influences of infant-directed speech on early word recognition by Leher Singh, Sarah Nestor, Chandni Parikh, & Ashley Yull. Misaacso (talk) 01:11, 29 February 2012 (UTC)

This study was done to gain knowledge about the influence of infant-directed speech on the long-term storage of words in a native language. The researchers wanted to know whether the style of the stimulus input influences the capacity for long-term storage and the ability to retrieve the information. It was important to discover whether infant-directed speech could influence these aspects of word recognition before vocabulary production is evident in an infant.

When adults interact with infants, the speech used tends to be slower, have less sophisticated grammar, contain less content, and be produced with a higher-pitched voice. This child-directed speech, commonly termed infant-directed speech, has also been documented in languages other than English. Previous research focused on phoneme perception, syntactic parsing, word segmentation, and boundary detection. Other research found evidence that infants can generalize to a novel talker in voice-priming tasks when both the original and the novel talker produce the test stimuli. Since past research did not cover infants' ability to encode and retrieve words from their native language heard in infant-directed versus adult-directed speech, this study was prompted.

English-exposed 7.5-month-old infants were exposed either to the stimulus of an adult using infant-directed speech in the presence of the infant, or to an adult using adult-directed speech toward another adult while the infant was absent. The measures were the listening times for passages in which the familiarized word had been presented in infant-directed speech, passages in which the familiarized word had been presented in adult-directed speech, and passages in which no familiarized word was present.

In each condition the infant heard the words bike, hat, tree, or pear in various sentences. As the infants sat on their caregiver's lap, a flashing light in front of the infant attracted fixation; the center light was then turned off and a light on one side of the infant flashed while the speech stimulus was presented. Familiarization occurred with both infant- and adult-directed speech. The infants were tested 24 hours later to determine whether they could recognize the words from the previous day.

The study concluded that infant-directed speech is a key factor in recognizing words early in life, proposing that beyond infants' preference for this type of speech, it is also beneficial because it aids infants in retrieving and processing words. Infant-directed speech also helps an infant generalize memory representations, assists with storing words in long-term memory, and extends the representation of words in the infant's mind.

The conclusions of this research can prompt multiple directions of further inquiry. One such topic is which attention-getting aspect of infant-directed speech leads to the findings observed in this experiment. Another question stemming from this research is how words become associated with meaning for an infant; this has been studied in adults, but not much is known about how it applies to infants.

Singh, L., Nestor, S., Parikh, C., & Yull, A. (2009). Influences of infant-directed speech on early word recognition. Infancy, 14(6), 654-666. doi:10.1080/15250000903263973 Misaacso (talk) 07:15, 1 March 2012 (UTC)


Early phonological awareness and reading skills in children with Down syndrome by Esther Kennedy and Mark Flynn

It is commonly known that individuals with Down syndrome are fully capable of acquiring reading skills. However, much less is known about the processes that lead to the development of their literacy skills. Kennedy and Flynn looked broadly at the literacy skills of the children with Down syndrome who participated in this study, picking apart the different levels of attaining literacy skills, specifically phonological awareness. The difficulty in studying this population is that tests used with typically developing children must be adapted so that deficits in cognitive skills do not interfere with the areas assessed. They adapted tasks to assess phonological awareness, literacy, speech production, expressive language, hearing acuity, speech perception, and auditory-visual memory.

This study took place in New Zealand and included nine children with Down syndrome. They were between the ages of five and ten, and all had at least six months' exposure to formal literacy instruction in a mainstream school. Literacy teaching in New Zealand uses a "whole language" approach and focuses on meaning from the text, which means the children in this study had little to no history of phonologically based literacy instruction.

Because hearing impairment is prevalent in individuals with Down syndrome, hindering speech perception and auditory processing skills, an audiologist made sure the children could hear clearly throughout the study. To test short-term memory, the subjects were asked to recall unrelated pictures they had studied whose names were one, two, and three syllables long. To test speech production, the Assessment of Phonological Processing Revised (Hodson, 1986) was used to obtain a Percentage Consonants Correct score from a list of 106 single words. To test expressive language, a mean length of utterance (MLU) was computed from 50-100 intelligible utterances. They used two different methods to test reading. The first was the Burt Word Reading Test-New Zealand Revision (Gilmore, Croft & Reid, 1981); however, if the child was unintelligible, the investigators requested a list of words the child could consistently read accurately. They also tested letter-sound knowledge: the children were asked to identify the letter whose sound the investigator produced. The investigators divided the letters in an attempt to avoid misperceptions between letters that sound similar. They also avoided adding a vowel after voiced phonemes and lengthened them when possible, using, for example, "vvv" rather than "vuh."

The results matched Kennedy and Flynn's predictions. Participants performed better on the tasks the longer they had been in school, and tasks that required a spoken response were more difficult to score due to speech impairments. Participants with higher phoneme awareness skills had higher reading levels; however, only one participant was able to detect rhyming. This study looked solely at reading skills based on text decoding, not at whether the participants were able to extract meaning from what they read. The study did not include a control group and had only nine participants, which limits the conclusions.

Kennedy, E. J., & Flynn, M. C. (2003). Early phonological awareness and reading skills in children with Down syndrome. Down Syndrome Research and Practice, 8(3), 100-109. Lcannaday (talk) 00:48, 1 March 2012 (UTC)

_____________________________________________________________________________________________________

Modified Spectral Tilt Affects Older, but Not Younger, Infants' Native-Language Fricative Discrimination by Elizabeth Beach & Christine Kitamura

At birth, infants rely on basic auditory abilities to distinguish native and nonnative speech, and up until 6 months of age they prefer low-frequency infant-directed speech to adult-directed speech. They then begin to learn their native vowels and, at 9 months, consonants as well. As infants' ability to distinguish nonnative consonants decreases while their ability to distinguish native consonants improves, they are said to go from a language-general to a language-specific mode of speech perception. This led researchers Beach and Kitamura to investigate how adjusting the frequency of native speech affects infants' speech perception as they develop.

In this study, the ability of 6- and 9-month-old infants to discriminate between the fricative consonants /f/ and /s/ at unmodified, high, and low frequencies was tested. Ninety-six infants were assigned evenly to one of three conditions: normal speech unmodified, normal speech at a lower frequency, and normal speech at a higher frequency. The speech stimuli were four samples of /f/ and four of /s/. Measures of overall duration and vowel fundamental frequency (F0) remained constant, while measures of center of gravity and the frequency of the second formant (F2) at the vowel transition varied. Each infant was tested individually using a visual habituation procedure in which an auditory stimulus was presented when the infant fixated on the display. Two no-change controls of the habituation stimulus were presented to ensure there was no spontaneous recovery. Control trials were then followed by two test trials, which alternated the test stimulus with the habituation stimulus.
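In a habituation design like this one, discrimination is inferred from recovery of looking time on test trials relative to the no-change control trials. The Python sketch below is only a hypothetical illustration of that comparison; the fixation values and variable names are invented.

```python
from statistics import mean

def recovery_score(control_fixations, test_fixations):
    """Difference in mean fixation duration (seconds) between the test trials,
    which alternate the novel fricative with the habituated one, and the
    no-change control trials. A positive score suggests the infant noticed
    the /f/-/s/ change."""
    return mean(test_fixations) - mean(control_fixations)

# Hypothetical fixation durations for one infant (seconds per trial).
control = [4.1, 3.8]
test = [6.0, 5.4]
print(f"recovery = {recovery_score(control, test):.2f} s")
```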

Results showed that in the normal speech condition, regardless of age, infants increased their fixation durations in test trials compared with control trials; both age groups showed evidence of discriminating /f/ from /s/. In the low-frequency condition, 6-month-old infants had longer fixation periods than 9-month-old infants, and both age groups discriminated /f/-/s/. In the high-frequency condition, 6-month-old infants showed a larger increase in fixation times, and younger but not older infants were sensitive to the fricative contrast. In sum, 6-month-old infants can discriminate /f/-/s/ regardless of speech modification but do best in the unmodified or high-frequency conditions, while 9-month-olds could only discriminate /f/-/s/ in the normal speech or low-frequency conditions, with their best performance in the normal condition.

Based on the acoustic mode of perception first used by infants, the researchers predicted that amplifying higher frequencies would lead to increased discrimination for both age groups; the results show evidence of this in 6-month-olds but not 9-month-olds. On a linguistic basis, they predicted that 9-month-olds would only be able to discriminate /f/-/s/ in the normal speech condition, and the 9-month-olds' inability to discriminate in the high- and low-frequency conditions supports this.

This study will serve as a basis for future research on speech perception in infants with hearing loss and brings us closer to providing such infants with the amplification strategies that best support the development of their language skills.

Beach, E., & Kitamura, C. (2011). Modified spectral tilt affects older, but not younger, infants' native-language fricative discrimination. Journal of Speech, Language, and Hearing Research, 54(2), 658-667. doi:10.1044/1092-4388(2010/08-0177) Mvanfoss (talk) 01:21, 1 March 2012 (UTC)

__________________________________________________________________________________________________

Maternal Speech to Infants in a Tonal Language: Support for Universal Prosodic Features in Motherese

Motherese, baby talk, and infant-directed speech are common terms for the distinctive voice adults use when speaking to infants. Previous research identified that infant-directed speech has unique acoustic qualities, or prosodic features. For example, prosodic features such as higher pitch and slower tempo are consistently associated with motherese. Furthermore, this type of speech provides benefits for infants' language development. Since these results are so pervasive across English-speaking mothers, DiAnne Grieser and Patricia Kuhl attempted to test whether this prosodic pattern occurs in other languages. Specifically, they wanted to test a tonal language, in which a change in pitch alters the meaning of a word. This test helps determine whether the pattern is universal.

In this experiment, there were eight monolingual women who spoke Mandarin Chinese and were mothers of infants between six and ten weeks of age. Each woman was recorded as she spoke on the telephone to a Chinese-speaking friend and as she spoke to her infant, whom she held in her lap. The average fundamental frequency (F0), average pitch range for each sample, average pitch range for each phrase, average phrase duration, and average pause duration were recorded for the adult-to-adult (A-A) and the adult-to-infant (A-I) conversations.
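Given an extracted pitch (F0) track plus phrase and pause timings for a recording, the measures listed above reduce to simple summary statistics. The sketch below is a minimal, hypothetical illustration in Python; the input data structure and the numbers in the example are invented.

```python
from statistics import mean

def prosody_summary(f0_hz, phrase_durations_s, pause_durations_s):
    """Summarize a recording the way the A-A and A-I samples were compared:
    average F0, pitch range, mean phrase duration, and mean pause duration."""
    return {
        "mean_f0_hz": mean(f0_hz),
        "pitch_range_hz": max(f0_hz) - min(f0_hz),
        "mean_phrase_s": mean(phrase_durations_s),
        "mean_pause_s": mean(pause_durations_s),
    }

# Hypothetical values for one adult-to-infant sample.
print(prosody_summary(f0_hz=[220, 340, 410, 280],
                      phrase_durations_s=[1.1, 0.9],
                      pause_durations_s=[0.6, 0.8]))
```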

Overall, the findings illustrated that fundamental frequency and pitch range, whether measured over the whole sample or over individual phrases, shift significantly upward when Mandarin mothers speak to their infants; in other words, their pitch increases. Furthermore, phrase duration and pause duration are altered when the mothers speak to their infants: they speak more slowly, shorten their phrases, and lengthen their pauses in comparison to speech directed at adults.

These results indicate that Mandarin motherese is very similar to English motherese. Therefore, the prosodic patterns (increased average pitch, lengthened pauses, and shortened phrases) in maternal speech to infants are not language-specific. This is a surprising result considering that the tonal language of Mandarin Chinese relies on changes in pitch to indicate word meaning. The question then arises whether or not a developmental change in Mandarin motherese must occur when infants approach the age of language acquisition in order for them to accurately understand the differences between words.

Since these findings are fairly robust, it is important to further understand the benefit this type of speech has for infants. More specifically, research should focus on the acoustic characteristics of motherese that capture infants' attention. Research has identified that this universal pattern exists; the focus should now turn to the purpose it serves.

Grieser, D. L., & Kuhl, P. K. (1988). Maternal speech to infants in a tonal language: Support for universal prosodic features in motherese. Developmental Psychology, 14-20. TaylorDrenttel (talk) 01:28, 1 March 2012 (UTC)


Stuffed toys and speech perception

There is enormous variation in phoneme pronunciation among speakers of the same language, and yet most speech perception models treat these variations as irrelevancies that are filtered out. In fact, these variations are correlated with the social characteristics of the speaker and listener: you change the way you speak depending on who you're talking to. Recent research shows that these variations go beyond speakers: listeners actually perceive sounds differently depending on who they come from. Jennifer Hay and Katie Drager explored how robust this phenomenon is by testing whether merely exposing New Zealanders to something Australian could modify their perception.

Subjects heard the same sentences with a random change in accent: the /I/ sound was modified to sound more like an Australian accent or more like a New Zealand accent, and all subjects heard all variations. The only difference between the two groups was the type of stuffed animal present: either a koala, for the Australian condition, or a kiwi, for the New Zealand condition. After hearing each sentence, participants wrote on an answer sheet whether it sounded as if an Australian speaker or a New Zealand speaker had read it.

When the participants listened to the sentences with a koala nearby, they tended to perceive them as sounding more like an Australian accent, especially in the transitional sentences where the /I/ phoneme was ambiguous between the Australian and New Zealand accents. Similarly, when the kiwi was present, participants were more likely to perceive the sentences as sounding more like a New Zealand accent.

The researchers had originally been skeptical that these results could be obtained. Hay had previously performed a similar experiment, and the results from this study corroborated those earlier findings. This suggested to the researchers that invoking ideas about a particular region or social aspect can alter the way a sentence is perceived.

Hay, J., & Drager, K. (2010). Stuffed toys and speech perception. Linguistics, 48(4), 865-892. doi:10.1515/LING.2010.027 AndFred (talk) 03:12, 1 March 2012 (UTC)


Infants listen for more phonetic detail in speech perception than in word-learning tasks by Christine L. Stager & Janet F. Werker Hhoff12 (talk) 04:00, 1 March 2012 (UTC)


Phoneme Boundary Effect in Macaque Monkeys

There has been debate over which specific characteristics of language are unique to humans. A popular approach to investigating this topic is to test possible innate language processes in animals and then compare the results to those of human subjects. There has been previous research on the nature and origins of the phoneme boundary effect, much of it centered on speech versus non-speech comparisons and on differences between human and animal subjects.

Prior to this particular study on macaque monkeys, five studies had compared perception of speech sounds between animal and human subjects. These studies concluded that certain nonhuman species are able to perceptually partition speech continua in the region already defined by human listeners, and that animal subjects can discriminate stimulus pairs drawn from speech-sound continua. To add to the data that had already been gathered, Kuhl and Padden aimed to extend the research to voiced and voiceless continua in order to further investigate phonetic boundaries.

In the study, Kuhl and Padden used three macaque monkeys as their subjects. The subjects were tested on their ability to distinguish between pairs of stimuli with voiced and voiceless properties (/ba/-/pa/, /da/-/ta/, /ga/-/ka/). The subjects were restrained in chairs during testing, and the audio signals were delivered through an earphone in the right ear. A response key was located in front of the subject, along with a green and a red light that were used to train the subject to respond at the correct time and in the correct way. In addition, an automatic feeder that dispensed applesauce was used as positive reinforcement throughout the study.

During the procedure there were two types of trials: the subjects were presented with stimuli that were the same or stimuli that were different, and the two trial types were run with equal probability. The subject was required to determine whether the stimuli were the same or different by pressing the response key for the full duration of the trial if the two stimuli were the same, and releasing the response key if they were different.
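Performance in a same/different task of this kind is typically summarized by comparing correct releases on "different" trials with correct holds on "same" trials. The Python sketch below is a hypothetical illustration of that bookkeeping, not the authors' analysis; the trial records are invented.

```python
def discrimination_summary(trials):
    """Each trial is (trial_type, released), where trial_type is 'same' or
    'different' and released is True if the subject released the key.
    Correct responses: hold on 'same' trials, release on 'different' trials."""
    hits = sum(1 for t, r in trials if t == "different" and r)
    misses = sum(1 for t, r in trials if t == "different" and not r)
    false_alarms = sum(1 for t, r in trials if t == "same" and r)
    correct_rejections = sum(1 for t, r in trials if t == "same" and not r)
    hit_rate = hits / max(hits + misses, 1)
    fa_rate = false_alarms / max(false_alarms + correct_rejections, 1)
    return {"hit_rate": hit_rate, "false_alarm_rate": fa_rate}

# Hypothetical block of trials.
trials = [("different", True), ("different", True), ("same", False), ("same", True)]
print(discrimination_summary(trials))
```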

Kuhl and Padden found that the subjects discriminated sounds that were phonetically different significantly better than sounds that were phonetically the same. These results were consistent with results found in human subjects, both adults and infants. Given these similarities, it can be suggested that the phoneme-boundary effect is not exclusive to humans. The results raised further issues involving innate language processes, including the relevance of animal data to human data and the role played by auditory constraints in the evolution of language. Further studies will be necessary to determine how far these results can be applied to the overall evolution of language.

Kuhl, P. K., & Padden, D. M. (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception and Psychophysics. doi:10.3758/BF03204208 Anelso (talk)

This article was about speech perception remaining intact even when the canonical acoustic elements of the spectrum are eliminated. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection. Three tests were conducted to estimate the effects of exposure to natural and sine-wave samples of speech on this kind of perceptual versatility. Sine-wave speech is defined here as synthesizing the voice in a different form, removing particular elements of the natural signal.

The first experiment was a benchmark of the intelligibility of easy and hard sine-wave words. This initial procedure aimed to determine a baseline difference in recognition performance between easy and hard words, using test items created by modeling sine-wave synthesis on natural samples spoken by a single talker. The experimenters expected sine-wave speech to behave like the talker's natural speech. They used two sets of seventy-two words (easy and hard), which differed in characteristics such as mean frequency of occurrence and mean neighborhood density (the words were also spoken by a male talker wearing a headset). The participants were twelve English-speaking volunteers recruited from the undergraduate population of Barnard College and Columbia University. They listened to the words and wrote them down in a booklet (guessing was encouraged). The results showed better recognition for easy words (42%) than for hard words (25%).

The second experiment tested the effect of exposure to sine-wave speech, comparing exposure to the acoustic form of the contrasts with exposure to the idiolectal characteristics of a specific talker. Three kinds of exposure were provided, each to a different group of listeners, preliminary to the word recognition test: (a) sine-wave sentences based on the speech of the same talker whose samples were used as models for the easy and hard words; (b) natural sentences of the talker whose utterances were used as models for the sine-wave words, to provide familiarity with the idiolect of the sine-wave words without also creating familiarity with sine-wave timbre; and (c) sine-wave sentences based on natural samples of a different talker, to familiarize listeners with the timbre of sine-wave speech without also producing experience of the idiolect of the talker who produced the models for the sine-wave words. Two kinds of test materials were used: sentences for the exposure interval, and easy and hard sine-wave words for the spoken word identification test. The three exposure sets were: Same Talker Natural, a set of seventeen natural utterances produced by one of the authors, the same talker whose speech served as the model for the easy and hard sine-wave words; Same Talker SW, a set of 17 sine-wave sentences; and Different Talker SW, 17 sine-wave sentences modeled on natural utterances spoken by one of the researchers. The participants were thirty-six volunteers from the undergraduate population of Barnard College and Columbia University, randomly assigned to three groups of 12 listeners. The subjects listened to each sentence 5 times (1 second between sentences and 3 seconds between trials). Following this portion of the test came the identification of easy and hard words, with subjects writing their answers in a booklet.

The sentence transcriptions were scored, and performance was uniformly good, with natural sentences transcribed nearly error-free and sine-wave sentences identified at a high level despite a difference between the talkers (Same Talker SW = 93% correct, Different Talker SW = 78% correct). Each of the 34 sine-wave sentences was identified correctly by several listeners. To summarize the results: easy words were recognized better than hard words in every condition; exposure to natural sentences of the talker whose utterances were used as models for the sine-wave words did not differ from no exposure, nor did it differ from exposure to sine-wave speech of a different talker; and recognition improved for easy and hard words alike after exposure to sine-wave speech produced by the talker who spoke the natural models for the sine-wave words.

The third experiment was a control test of uncertainty as the cause of performance differences between easy and hard sine-wave words. It estimated residual effects on recognition attributable to the inherent properties of the synthetic test items themselves: by imposing conditions that eliminated the contribution of signal-independent uncertainty to identification, the test exposed any residual signal-dependent differences in word recognition caused by errors in estimating spectrotemporal properties when creating the sine-wave synthesis parameters. The same easy and hard sine-wave words from the first experiment were used, except this time they were arranged so that some started with the same letters and some ended with the same letters. Twenty-four volunteers were recruited from the undergraduate population of Barnard College and Columbia University and randomly assigned to two groups of 12. These participants were given 140 trials of words and wrote them down in a booklet (guessing was encouraged). The results showed that the two groups (shared beginnings/endings versus no similarities) scored close to the same, and performance was strong for both easy and hard words: approximately 88 percent of the words were identified correctly. The discussion concluded that a listener who has accommodated this extreme perturbation of speech perception expresses the epitome of versatility, and the three tests reported here aimed to calibrate the components of this perceptual feat by assessing signal-dependent and signal-independent functions.
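Sine-wave speech is conventionally produced by replacing a natural utterance with a few time-varying sinusoids that follow its formant tracks. The NumPy sketch below is a hypothetical illustration of that resynthesis idea, not the materials used in the study; the formant tracks in the example are invented.

```python
import numpy as np

def sinewave_voice(formant_tracks_hz, duration_s=0.5, sample_rate=16000):
    """Synthesize a sine-wave speech analogue by summing one sinusoid per
    formant track. Each track is a list of frequencies interpolated over the
    utterance; phase is the running integral of instantaneous frequency."""
    n = int(duration_s * sample_rate)
    t = np.arange(n) / sample_rate
    signal = np.zeros(n)
    for track in formant_tracks_hz:
        freq = np.interp(t, np.linspace(0, duration_s, len(track)), track)
        phase = 2 * np.pi * np.cumsum(freq) / sample_rate
        signal += np.sin(phase)
    return signal / len(formant_tracks_hz)

# Invented tracks roughly mimicking three formants gliding over 500 ms.
tracks = [[300, 600, 500], [1200, 1700, 1500], [2500, 2400, 2600]]
audio = sinewave_voice(tracks)
print(audio.shape)  # (8000,)
```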

Remez, Robert E.; Dubowski, Kathryn R.; Broder, Robin S.; Davids, Morgana L.; Grossman, Yael S.; Moskalenko, Marina; Pardo, Jennifer S.; Hasbun, Sara Maria; Journal of Experimental Psychology: Human Perception and Performance, Vol 37(3), Jun, 2011. pp. 968-977. Gmilbrat (talk)

Miller, Joanne L.; Mondini, Michele; Grosjean, Francois; Dommergues, Jean-Yves; Dialect effects in speech perception: The role of vowel duration in Parisian French and Swiss French. Language and Speech, Vol 54(4), p. 467-485. Sek12 (talk)

The experiments in this article ask how native Parisian French and native Swiss French listeners use vowel duration in deciding on the contrast between a short /o/ and a long /o/ in the French words cotte and cote. The authors wanted to see whether listeners could perceive the difference between the words in both their native dialect and the other dialect.

This research question is important because it asks whether the duration contrast between the two vowels used in the experiments is noticeably perceivable in the Parisian and Swiss French dialects, which are almost identical.

Previous research on this topic, also done by the same authors, used only vowel duration as the indicator of vowel identification. It found that vowel duration played a much more important role in Swiss French than in Parisian French: Parisian French listeners identified the vowels using only spectral information, while the Swiss French listeners used both spectral information and vowel duration to identify the vowels presented to them. The current study investigates more deeply the dialect difference between the vowels in the study (a short /o/ and a long /o/) and the way Parisian and Swiss French listeners perceive the difference between those vowels in words.

In Experiment 1, the researchers created four speech series to find the best exemplars of vowel duration in both dialects. Two of the series were based on Parisian French speech and two on Swiss French speech. Each of the four series used the word cotte with a short vowel and with a long vowel, and likewise included cote with the same variation. The variable measured in both studies was the difference in vowel duration between the two dialects.

The procedure was the same in Experiments 1 and 2. Sixteen native Parisian French and sixteen Swiss French participants were chosen, and four series of stimuli were created for the study. Each series consisted of short- and long-duration vowels in the words cotte and cote and was based on the natural speech of both groups. All the participants took part in two separate sessions, each with three parts: familiarization, practice, and test. In the familiarization phase, the listeners were presented with stimuli and rated them on a scale of 1-7 (1 being a poor exemplar and 7 the best-fitting exemplar); no results were taken from this phase. In the practice phase, the participants were presented with the same stimuli as in the test phase, in random order. In the last part of the experiment, the test phase, the participants were presented with 14 blocks of stimuli and gave a rating based on vowel duration.

The results indicate that Swiss French listeners judged the longer vowels to be the best exemplars of the short /o/ and long /o/ when they listened to both the Swiss French series and the Parisian French series. For both Parisian and Swiss French listeners, the best exemplar was judged to be the long-vowel variant of the words used in the study. Both groups showed sensitivity to vowel duration in both dialects for both the short /o/ and the long /o/. The researchers expected that only a small range of vowel durations would be perceived by listeners as good exemplars, and this expectation was correct.

The conclusion of this study tells us that "taken together, the analyses indicate that, overall, short /o/ and long /o/ vowels are differentiated by duration in both dialects, but that the difference between the two vowels is greater in Swiss French than in Parisian French, owing to a longer /o/."

Word Processing

Emotion Words Affect Eye Fixations During Reading (Graham G. Scott, Patrick J. O'Donnell, and Sara C. Sereno) Katelyn Warburton (talk) 21:49, 28 February 2012 (UTC)

Previous research has evaluated the influence of “emotion words” on arousal, internal activation, and valence (value/worth). There is little disagreement that a reader’s response to emotion words can influence cognition, but physiological, biological, environmental, and mental influences remain understudied. This study evaluates the effect emotionality can have on lexical processes by tracking eye movements during fluent reading.

Forty-eight native English-speaking participants with uncorrected vision were asked to read from a computer screen (ViewSonic 17GS CRT) while their right eye movements were monitored (by a Fourward Technologies Dual Purkinje Eyetracker). Arousal and valence values as well as frequencies for the words were obtained, and the values were averaged across categories. Twenty-four sets of word triples, comprising positive, negative, and neutral emotion words, were presented to participants, with the target emotion words in the middle of the sentence. Participants were told that they would be asked yes/no questions after reading each sentence to ensure they were paying attention. After they read the sentence and answered the question, they were instructed to look at a small box on the screen while the tracker recalibrated. This occurred through all 24 trial sets.

To verify the plausibility of the test materials, three additional norming tests were conducted with different participants from the initial study. The first sub-study involved 18 participants who rated the plausibility of each emotion word appearing in a sentence. The second involved a similar task: participants were asked to rate the plausibility of an emotion word, but made the judgment from a sentence fragment rather than an entire sentence. Finally, 14 different participants were given a statement and asked to generate the corresponding emotion word. These three norming studies verified that the emotion words used in the central study were plausible without being predictable.

This is the first study to analyze single emotion words in the context of fluent reading. Researchers found that participants had shorter fixation durations on positive and negative emotion words than on neutral words. In addition, the influence of word frequency on fixation was modulated by arousal level; more specifically, low-frequency words were facilitated by high levels of emotional arousal, either positive or negative. Therefore, emotional characteristics and word frequency together influence eye fixations while reading. The results were consistent with previous research on emotion word processing and extended past studies by evaluating emotion word processing during fluent reading. In short, this study shows the important role of emotion in language processing: the emotional nature of a word, defined by its arousal and valence characteristics, affects lexical access and therefore influences information processing. By following eye movements, the researchers were able to identify the rate at which words are recognized, demonstrating that word meanings are activated and integrated quickly into the reading context.
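As a purely hypothetical illustration of the basic comparison, mean fixation durations on the target words can be grouped by emotion category; the field names and millisecond values below are invented, not the study's data.

```python
from statistics import mean
from collections import defaultdict

def mean_fixation_by_category(fixations):
    """Average fixation duration (ms) on target words, grouped by emotion
    category ('positive', 'negative', 'neutral')."""
    grouped = defaultdict(list)
    for category, duration_ms in fixations:
        grouped[category].append(duration_ms)
    return {category: mean(values) for category, values in grouped.items()}

# Invented data for a handful of trials.
fixations = [("positive", 215), ("negative", 220), ("neutral", 245),
             ("positive", 205), ("neutral", 250)]
print(mean_fixation_by_category(fixations))
```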

Scott, G. G., O'Donnell, P. J., & Sereno, S. C. (2012). Emotion words affect eye fixations during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, doi: 10.1037/a0027209


The Structural Organization of the Mental Lexicon and Its Contribution to Age-Related Declines in Spoken-Word Recognition Amf14 (talk) 04:22, 29 February 2012 (UTC)


Evidence for Sequential Processing in Visual Word Recognition (Peter J. Kwantes and Douglas J. K. Mewhort)

When reading a word, there can be many possible candidates for what the word will turn out to be, but at a certain point, the uniqueness point (UP), only one possible option remains. Previous research by Radeau et al. tested the ability to encode words sequentially using the UP, defined in terms of the position of the letter that makes a word unique. The study was designed to determine whether the UP followed the same pattern as in speech recognition, where latency is shorter for words with an early UP than for words with a late UP; however, the test showed the opposite results. In an effort to explain these mixed results, Kwantes and Mewhort redefined the study using an orthographic uniqueness point (OUP), the point at which a word becomes unique when read from left to right. The study aimed to determine whether words with an early OUP could be identified faster than those with a late OUP.
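The orthographic uniqueness point of a word can be found by scanning it from left to right against a lexicon and noting the first letter position at which no other word shares the same prefix. The Python sketch below is a minimal illustration with a small invented lexicon; it is not the materials or code used by the authors.

```python
def orthographic_uniqueness_point(word, lexicon):
    """Return the 1-based letter position at which `word` becomes the only
    remaining candidate in `lexicon`, or None if it never does."""
    candidates = set(lexicon) - {word}
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        candidates = {w for w in candidates if w.startswith(prefix)}
        if not candidates:
            return i
    return None

# Toy lexicon: "baklava" diverges from the other entries earlier than "balloon".
lexicon = ["baklava", "balance", "balloon"]
print(orthographic_uniqueness_point("baklava", lexicon))  # -> 3 (earlier OUP)
print(orthographic_uniqueness_point("balloon", lexicon))  # -> 4 (later OUP)
```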

The initial study involved twenty-five undergraduate students who were asked to name a series of seven-letter words aloud, as quickly as possible, as the words were presented visually in sequence on a screen. Half of the words had an OUP at position 4 (early OUP) and the other half had an OUP at position 6 or 7 (late OUP). The response time (RT) was measured from the onset of the word until the voice response began.

The reaction time results showed a clear advantage for early-OUP words, which were named on average 29 ms faster than late-OUP words.

The second study aimed to determine whether the results of Experiment 1 truly reflected a process of production and pronunciation, or whether they depended on reading processes instead. Experiment 2 used a procedure similar to the previous study but asked participants to read each word silently and then name it aloud when cued to do so. An early-OUP advantage was not detected when naming was delayed by a cue, suggesting no interaction with output or production processes. The third study also repeated Experiment 1, but removed the visual word stimulus after 200 ms in order to rule out an effect of eye movements. Experiment 3 showed results similar to Experiment 1, with an early-OUP advantage, suggesting that the advantage is not a result of eye movements.

Together, the three experiments suggest that the early-OUP advantage in word processing results from retrieval from the lexicon, without interference from reading processes or eye movements. The results affirm the researchers' predictions and suggest possible reasons for Radeau et al.'s failure to find early-UP advantages. Orthographic uniqueness points highlight the important role of lexical processes in word recognition.

Kwantes, P. J., & Mewhort, D. K. (1999). Evidence for sequential processing in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 25(2), 376-381. doi:10.1037/0096-1523.25.2.376 Smassaro24 (talk) 16:22, 3 March 2012 (UTC)

Sentence Processing

Bilingualism