Speech production

From Wikipedia, the free encyclopedia

Speech production is the process by which thoughts are translated into speech. This includes the selection of words, the organization of relevant grammatical forms, and then the articulation of the resulting sounds by the motor system using the vocal apparatus. Speech production can be spontaneous, such as when a person creates the words of a conversation; reactive, such as when they name a picture or read aloud a written word; or imitative, such as in speech repetition. Speech production is not the same as language production, since language can also be produced manually, by signs.

In ordinary fluent conversation people pronounce roughly four syllables, ten to twelve phonemes, and two to three words per second, drawn from a vocabulary that can contain 10,000 to 100,000 words.[1] Errors in speech production are relatively rare, occurring at a rate of about once in every 900 words in spontaneous speech.[2] Words that are commonly spoken, learned early in life, or easily imagined are quicker to say than ones that are rarely said, learned later in life, or abstract.[3][4]

Normally speech is created with pulmonary pressure: air expelled from the lungs generates sound by phonation through the glottis in the larynx, which is then modified by the vocal tract into different vowels and consonants. However, speech production can occur without the lungs and glottis, as in alaryngeal speech, which uses only the upper parts of the vocal tract. An example of such alaryngeal speech is Donald Duck talk.[5]

The vocal production of speech may be associated with the production of hand gestures that act to enhance the comprehensibility of what is being said.[6]

The development of speech production across an individual's life begins with an infant's first babble and is transformed into fully developed speech by the age of five.[7] The first stage of speech does not occur until around age one (the holophrastic phase). Between the ages of one and a half and two and a half the infant can produce short sentences (the telegraphic phase). After two and a half years the infant develops systems of lemmas used in speech production. Around four or five the child's stock of lemmas has greatly increased; this enhances the child's production of correct speech, and they can now produce speech like an adult. Adult speech production proceeds in four stages: activation of lexical concepts, selection of the needed lemmas, morphological and phonological encoding of the speech, and phonetic encoding of the word.[7]

Three stages


The production of spoken language involves three major levels of processing: conceptualization, formulation, and articulation.[1][8][9]

The first is the process of conceptualization, or conceptual preparation, in which the intention to create speech links a desired concept to the particular spoken words to be expressed. Here the preverbal intended messages are formulated, specifying the concepts to be expressed.[10]

The second stage is formulation, in which the linguistic form required for the expression of the desired message is created. Formulation includes grammatical encoding, morpho-phonological encoding, and phonetic encoding.[10] Grammatical encoding is the process of selecting the appropriate syntactic word, or lemma. The selected lemma then activates the appropriate syntactic frame for the conceptualized message. Morpho-phonological encoding is the process of breaking words down into the syllables to be produced in overt speech. Syllabification depends on the preceding and following words, for instance: I-com-pre-hend vs. I-com-pre-hen-dit.[10] The final part of the formulation stage is phonetic encoding. This involves the activation of articulatory gestures dependent on the syllables selected in the morpho-phonological process, creating an articulatory score as the utterance is pieced together and the order of movements of the vocal apparatus is completed.[10]

The third stage of speech production is articulation, which is the execution of the articulatory score by the lungs, glottis, larynx, tongue, lips, jaw and other parts of the vocal apparatus resulting in speech.[8][10]
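The three levels can be pictured as a pipeline. The sketch below is only an illustration of the staged architecture, not any published model: the two-entry lexicon, the `conceptualize`/`formulate`/`articulate` function names, and the syllable lists are all made up for this example, and real formulation and articulation are vastly richer processes.

```python
# Toy sketch of the three processing levels: conceptualization,
# formulation (lemma selection plus syllabification), and articulation.
# The tiny lexicon and its syllable lists are illustrative assumptions.

LEXICON = {
    "GREETING": {"lemma": "hello", "syllables": ["hel", "lo"]},
    "FAREWELL": {"lemma": "goodbye", "syllables": ["good", "bye"]},
}

def conceptualize(intent):
    """Map a communicative intention to a preverbal concept."""
    return intent.upper()

def formulate(concept):
    """Select a lemma and break it into syllables
    (a stand-in for morpho-phonological encoding)."""
    return LEXICON[concept]["syllables"]

def articulate(syllables):
    """Stand in for executing the articulatory score:
    emit the syllable gestures in order."""
    return "-".join(syllables)

def produce(intent):
    # Run the three stages in sequence.
    return articulate(formulate(conceptualize(intent)))

print(produce("greeting"))  # hel-lo
```

The point of the sketch is only that each stage consumes the previous stage's representation: a concept, then a lemma with its syllable structure, then an ordered motor plan.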

For right-handed people, the majority of speech production activity occurs in the left cerebral hemisphere.

Neuroscience


The motor control for speech production in right-handed people depends mostly upon areas in the left cerebral hemisphere. These areas include the bilateral supplementary motor area, the left posterior inferior frontal gyrus, the left insula, the left primary motor cortex, and the temporal cortex.[11] Subcortical areas such as the basal ganglia and cerebellum are also involved.[12][13] The cerebellum aids the sequencing of speech syllables into fast, smooth, and rhythmically organized words and longer utterances.[13]

Disorders


Speech production can be affected by several disorders, including apraxia of speech, stuttering, dysarthria, and aphasia.

History of speech production research

Examples of speech errors. The target is what the speaker intended to say. The error is what the speaker actually said. These mistakes have been studied to learn about the structure of speech production.

Until the late 1960s research on speech focused on comprehension. As researchers collected greater volumes of speech error data, they began to investigate the psychological processes responsible for the production of speech sounds and to contemplate possible processes for fluent speech.[14] Findings from speech error research were soon incorporated into speech production models. Evidence from speech error data supports the following conclusions about speech production:

  1. Speech is planned in advance.[15]
  2. The lexicon is organized both semantically and phonologically.[15] That is, by meaning and by the sound of the words.
  3. Morphologically complex words are assembled.[15] Words that contain multiple morphemes are put together during the speech production process. Morphemes are the smallest units of language that carry meaning, for example the past-tense suffix "-ed".
  4. Affixes and functors behave differently from content words in slips of the tongue.[15] This suggests that the rules governing how a word can be used are stored with it; generally, when speech errors are made, the erroneous words maintain their functions and make grammatical sense.
  5. Speech errors reflect rule knowledge.[15] Even in our mistakes, speech is not nonsensical. The words and sentences that are produced in speech errors are typically grammatical, and do not violate the rules of the language being spoken.

Aspects of speech production models


Models of speech production must contain specific elements to be viable. These include the elements from which speech is composed, listed below. The accepted models of speech production discussed in more detail below all incorporate these stages either explicitly or implicitly, and the ones that are now outdated or disputed have been criticized for overlooking one or more of the following stages.[16]

The attributes of accepted speech models are:

a) a conceptual stage where the speaker abstractly identifies what they wish to express.[16]

b) a syntactic stage where a frame is chosen that words will be placed into; this frame is usually a sentence structure.[16]

c) a lexical stage where a search for a word occurs based on meaning. Once the word is selected and retrieved, information about it becomes available to the speaker involving phonology and morphology.[16]

d) a phonological stage where the abstract information is converted into a speech-like form.[16]

e) a phonetic stage where instructions are prepared to be sent to the muscles of articulation.[16]

Also, models must allow for forward planning mechanisms, a buffer, and a monitoring mechanism.

Following are a few of the influential models of speech production that account for or incorporate the previously mentioned stages and include information discovered as a result of speech error studies and other disfluency data,[17] such as tip-of-the-tongue research.

Models


The Utterance Generator Model (1971)


The Utterance Generator Model was proposed by Fromkin (1971).[18] It is composed of six stages and was an attempt to account for the previous findings of speech error research. The stages were based on possible changes in representations of a particular utterance. In the first stage a person generates the meaning they wish to convey. The second stage involves the message being translated into a syntactic structure; here, the message is given an outline.[19] The third stage is where the message gains different stresses and intonations based on its meaning. The fourth stage is concerned with the selection of words from the lexicon. After the words have been selected in Stage 4, the message undergoes phonological specification.[20] The fifth stage applies rules of pronunciation and produces the syllables to be output. The sixth and final stage is the coordination of the motor commands necessary for speech: phonetic features of the message are sent to the relevant muscles of the vocal tract so that the intended message can be produced. Despite its ingenuity, researchers have criticized Fromkin's model; although it accounts for many nuances and data found by speech error studies, they concluded it still had room for improvement.[21][22]

The Garrett model (1975)


A more recent attempt (than Fromkin's) to explain speech production was published by Garrett in 1975.[23] Garrett also created this model by compiling speech error data. There are many overlaps between this model and the Fromkin model on which it was based, but Garrett added a few things that filled gaps pointed out by other researchers. The Garrett and Fromkin models both distinguish between three levels: a conceptual level, a sentence level, and a motor level. These three levels are common to contemporary understandings of speech production.[24]

This is an interpretation of Dell's model. The words at the top represent the semantic category. The second level represents the words that denote the semantic category. The third level represents the phonemes (syllabic information including onset, vowels, and codas).

Dell's model (1994)


In 1994,[25] Dell proposed a model of the lexical network that became fundamental to the understanding of the way speech is produced.[1] This model attempts to symbolically represent the lexicon and, in turn, explain how people choose the words they wish to produce and how those words are organized into speech. Dell's model is composed of three levels: semantics, words, and phonemes. The words in the highest level of the model represent the semantic category (in the image, the words winter, footwear, feet, and snow represent the semantic categories of boot and skate). The second level represents the words that refer to the semantic category (in the image, boot and skate). The third level represents the phonemes (syllabic information including onset, vowels, and codas).[26]
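A minimal spreading-activation sketch in the spirit of Dell's network is shown below. The node sets, connection weights, and selection rule are simplified assumptions chosen for illustration (Dell's published model uses iterative activation spreading with decay and noise, none of which is modeled here); only the three-layer semantics→words→phonemes structure follows the description above.

```python
# Toy spreading of activation over a three-layer lexical network:
# semantic features -> words -> phonemes. The uniform weights and the
# winner-take-all selection are illustrative assumptions, not Dell's
# published parameters.

semantic_to_word = {
    "winter":   {"boot": 1.0, "skate": 1.0},
    "footwear": {"boot": 1.0, "skate": 1.0},
    "snow":     {"boot": 1.0},
}
word_to_phonemes = {
    "boot":  ["b", "u", "t"],
    "skate": ["s", "k", "e", "t"],
}

def select_word(active_features):
    """Sum activation flowing from active semantic features into word
    nodes and pick the most activated word."""
    activation = {}
    for feature in active_features:
        for word, weight in semantic_to_word.get(feature, {}).items():
            activation[word] = activation.get(word, 0.0) + weight
    return max(activation, key=activation.get)

word = select_word(["winter", "footwear", "snow"])
print(word, word_to_phonemes[word])  # boot ['b', 'u', 't']
```

In this toy run, "boot" wins because it receives activation from all three features while "skate" receives it from only two; the selected word node then makes its phoneme-level information available, mirroring the word-to-phoneme layer of the model.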

Levelt model (1999)


Levelt further refined the lexical network proposed by Dell. Through the use of speech error data, Levelt recreated the three levels of Dell's model. The conceptual stratum, the top and most abstract level, contains the information a person has about ideas of particular concepts.[27] The conceptual stratum also contains ideas about how concepts relate to each other. This is where word selection occurs: a person chooses which words they wish to express. The next, or middle, level, the lemma stratum, contains information about the syntactic functions of individual words, including tense and function.[1] This level functions to maintain syntax and place words correctly into sentence structures that make sense to the speaker.[27] The lowest and final level is the form stratum, which, similarly to the Dell model, contains syllabic information. From here, the information stored at the form stratum is sent to the motor cortex, where the vocal apparatus is coordinated to physically produce speech sounds.

Places of articulation

Human vocal apparatus used to produce speech

The physical structure of the human nose, throat, and vocal cords allows for the production of many unique sounds; these areas can be further broken down into places of articulation. Different sounds are produced in different areas, and with different muscles and breathing techniques.[28] Our ability to use these skills to create the various sounds needed to communicate effectively is essential to speech production. Speech is a psychomotor activity. Speech between two people is a conversation; conversations can be casual, formal, factual, or transactional, and the language structure and narrative genre employed differ depending on the context. Affect is a significant factor that controls speech. Manifestations that disrupt memory in language use due to affect include feelings of tension and states of apprehension, as well as physical signs such as nausea. Language-level manifestations of affect can be observed in the speaker's hesitations, repetitions, false starts, incompletions, syntactic blends, and the like. Difficulties in manner of articulation can contribute to speech difficulties and impediments.[29]

It is suggested that infants are capable of making the entire spectrum of possible vowel and consonant sounds. The IPA has created a system for understanding and categorizing all possible speech sounds, which includes information about the way in which a sound is produced and where it is produced.[29] This is extremely useful in the understanding of speech production, because speech can be transcribed based on sounds rather than spelling, which may be misleading depending on the language being spoken. Average speaking rates are in the range of 120 to 150 words per minute (wpm), and the same range is recommended for recording audiobooks. As people grow accustomed to a particular language, they are prone to lose not only the ability to produce certain speech sounds but also the ability to distinguish between them.[29]
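The IPA-style categorization of sounds by where and how they are produced can be sketched as a simple lookup table. The six consonants below really are classified this way in the IPA, but the tiny inventory and the `by_place` helper are purely illustrative:

```python
# A tiny IPA-style consonant inventory, categorized by place of
# articulation (where the sound is made) and manner of articulation
# (how it is made). The selection of sounds is illustrative only.

CONSONANTS = {
    "p": ("bilabial", "plosive"),
    "b": ("bilabial", "plosive"),
    "m": ("bilabial", "nasal"),
    "t": ("alveolar", "plosive"),
    "s": ("alveolar", "fricative"),
    "k": ("velar", "plosive"),
}

def by_place(place):
    """Return, sorted, all consonants produced at a given place of
    articulation."""
    return sorted(c for c, (p, _) in CONSONANTS.items() if p == place)

print(by_place("bilabial"))  # ['b', 'm', 'p']
```

Indexing sounds this way is what lets speech be transcribed by how it is produced rather than by spelling, as the paragraph above notes.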

Articulation


Articulation, often associated with speech production, is how people physically produce speech sounds. For people who speak fluently, articulation is automatic and allows 15 speech sounds to be produced per second.[30]

Effective articulation of speech includes the following elements: fluency, complexity, accuracy, and comprehensibility.[31]

  • Fluency: the ability to communicate an intended message, or to affect the listener in the way intended by the speaker. While accurate use of language is a component of this ability, over-attention to accuracy may actually inhibit the development of fluency. Fluency involves constructing coherent utterances and stretches of speech, responding, and speaking without undue hesitation (limited use of fillers such as uh, er, eh, like, you know). It also involves the ability to use strategies such as simplification and gestures to aid communication, as well as the use of relevant information and appropriate vocabulary and syntax.
  • Complexity: speech in which the message is communicated precisely. It includes the ability to adjust the message or negotiate the control of conversation according to the responses of the listener, and to use subordination and clausal forms appropriate to the roles of and relationship between the speakers. It also includes the use of sociolinguistic knowledge: the skills required to communicate effectively across cultures, the norms, and the knowledge of what is appropriate to say in what situations and to whom.
  • Accuracy: the use of proper and advanced grammar, subject-verb agreement, word order, and word form (excited/exciting), as well as appropriate word choice in spoken language. It also includes the ability to self-correct during discourse, clarifying or modifying spoken language for grammatical accuracy.
  • Comprehensibility: the ability to be understood by others; it is related to the sound of the language. Three components influence one's comprehensibility: pronunciation (saying the sounds of words correctly); intonation (applying proper stress on words and syllables, using rising and falling pitch to indicate questions or statements, using the voice to indicate emotion or emphasis, and speaking with an appropriate rhythm); and enunciation (speaking clearly at an appropriate pace, with effective articulation of words and phrases and appropriate volume).

Development


Before even producing a sound, infants imitate facial expressions and movements.[32] Around 7 months of age, infants start to experiment with communicative sounds by trying to coordinate producing sound with opening and closing their mouths.

During the first year of life infants cannot produce coherent words; instead they produce recurring babbling sounds. Babbling allows the infant to experiment with articulating sounds without having to attend to meaning. This repeated babbling starts the initial production of speech. Babbling works with object permanence and understanding of location to support the networks of our first lexical items, or words.[7] The infant's vocabulary growth increases substantially once they are able to understand that objects exist even when they are not present.

The first stage of meaningful speech does not occur until around the age of one. This is the holophrastic phase, in which infant speech consists of one word at a time (i.e. papa).[33]

The next stage is the telegraphic phase. In this stage infants can form short sentences (i.e., Daddy sit, or Mommy drink). This typically occurs between the ages of one and a half and two and a half years old. This stage is particularly noteworthy because of the explosive growth of their lexicon. During this stage, infants must select and match stored representations of words to the specific perceptual target word in order to convey meaning or concepts.[32] With enough vocabulary, infants begin to extract sound patterns, and they learn to break down words into phonological segments, increasing further the number of words they can learn.[7] At this point in an infant's development of speech their lexicon consists of 200 words or more and they are able to understand even more than they can speak.[33]

When they reach two and a half years their speech production becomes increasingly complex, particularly in its semantic structure. With a more detailed semantic network the infant learns to express a wider range of meanings, helping the infant develop a complex conceptual system of lemmas.

Around the age of four or five, the child's lemmas have a wide range of diversity, which helps them select the right lemma needed to produce correct speech.[7] Reading to infants enhances their lexicon. By this age, children who have been read to and exposed to more uncommon and complex words have heard 32 million more words than a child who is linguistically impoverished.[34] At this age the child should be able to speak in full sentences, similar to an adult.

References

  1. ^ a b c d Levelt, WJ (1999). "Models of word production" (PDF). Trends in Cognitive Sciences. 3 (6): 223–232. doi:10.1016/S1364-6613(99)01319-4. PMID 10354575. S2CID 7939521.
  2. ^ Garnham, A; Shillcock, RC; Brown, GDA; Mill, AID; Cutler, A (1981). "Slips of the tongue in the London–Lund corpus of spontaneous conversation" (PDF). Linguistics. 19 (7–8): 805–817. doi:10.1515/ling.1981.19.7-8.805. hdl:11858/00-001M-0000-0013-33D0-4. S2CID 144105729. Archived from the original (PDF) on 2011-10-08. Retrieved 2009-12-25.
  3. ^ Oldfield RC, Wingfield A (1965). "Response latencies in naming objects". Quarterly Journal of Experimental Psychology. 17 (4): 273–281. doi:10.1080/17470216508416445. hdl:11858/00-001M-0000-002C-17D2-8. PMID 5852918. S2CID 9567809.
  4. ^ Bird, H; Franklin, S; Howard, D (2001). "Age of acquisition and imageability ratings for a large set of words, including verbs and function words" (PDF). Behavior Research Methods, Instruments, and Computers. 33 (1): 73–9. doi:10.3758/BF03195349. PMID 11296722.
  5. ^ Weinberg, Bernd; Westerhouse, Jan (1971). "A Study of Buccal Speech". Journal of Speech and Hearing Research. 14 (3). American Speech Language Hearing Association: 652–658. doi:10.1044/jshr.1403.652. ISSN 0022-4685. PMID 5163900. also published as Weinberg, B.; Westerhouse, J. (1972). "A Study of Buccal Speech". The Journal of the Acoustical Society of America. 51 (1A). Acoustical Society of America (ASA): 91. Bibcode:1972ASAJ...51Q..91W. doi:10.1121/1.1981697. ISSN 0001-4966.
  6. ^ McNeill D (2005). Gesture and Thought. University of Chicago Press. ISBN 978-0-226-51463-5.
  7. ^ a b c d e Harley, T.A. (2011), Psycholinguistics. (Volume 1). SAGE Publications.
  8. ^ a b Levelt, WJM (1989). Speaking: From Intention to Articulation. MIT Press. ISBN 978-0-262-62089-5.
  9. ^ Jescheniak, JD; Levelt, WJM (1994). "Word frequency effects in speech production: retrieval of syntactic information and of phonological form". Journal of Experimental Psychology: Learning, Memory, and Cognition. 20 (4): 824–843. CiteSeerX 10.1.1.133.3919. doi:10.1037/0278-7393.20.4.824. S2CID 26291682.
  10. ^ a b c d e Levelt, W. (1999). "The neurocognition of language", pp. 87–117. Oxford University Press.
  11. ^ Indefrey, P; Levelt, WJ (2004). "The spatial and temporal signatures of word production components". Cognition. 92 (1–2): 101–44. doi:10.1016/j.cognition.2002.06.001. hdl:11858/00-001M-0000-0013-1C1F-E. PMID 15037128. S2CID 12662702.
  12. ^ Booth, JR; Wood, L; Lu, D; Houk, JC; Bitan, T (2007). "The role of the basal ganglia and cerebellum in language processing". Brain Research. 1133 (1): 136–44. doi:10.1016/j.brainres.2006.11.074. PMC 2424405. PMID 17189619.
  13. ^ a b Ackermann, H (2008). "Cerebellar contributions to speech production and speech perception: psycholinguistic and neurobiological perspectives". Trends in Neurosciences. 31 (6): 265–72. doi:10.1016/j.tins.2008.02.011. PMID 18471906. S2CID 23476697.
  14. ^ Fromkin, Victoria; Bernstein, Nan (1998). Speech Production Processing Models. Harcourt Brace College. p. 327. ISBN 978-0155041066.
  15. ^ a b c d e Fromkin, Victoria; Bernstein Ratner, Nan (1998). Chapter 7 Speech Production. Harcourt Brace College. pp. 322–327. ISBN 978-0155041066.
  16. ^ a b c d e f Field, John (2004). Psycholinguistics. Routledge. p. 284. ISBN 978-0415258906.
  17. ^ Fromkin, Victoria; Bernstein, Nan (1998). Speech Production Processing Models. Harcourt Brace College Publishers. p. 328. ISBN 978-0155041066.
  18. ^ Fromkin, Victoria; Bernstein Ratner, Nan (1998). Chapter 7 Speech Production (Second ed.). Florida: Harcourt Brace College Publishers. pp. 328–337. ISBN 978-0155041066.
  19. ^ Fromkin, Victoria (1971). Utterance Generator Model of Speech Production in Psycho-linguistics (2 ed.). Harcourt College Publishers. p. 328. ISBN 978-0155041066.
  20. ^ Fromkin, Victoria (1998). Utterance Generator Model of Speech Production in Psycho-linguistics (2 ed.). Harcourt. p. 330.
  21. ^ Garrett (1975). The Garrett Model in Psycho-linguistics. Harcourt College. p. 331. ISBN 978-0155041066.
  22. ^ Butterworth (1982). Psycho-linguistics. Harcourt College. p. 331.
  23. ^ Fromkin, Victoria; Bernstein, Nan (1998). The Garrett Model in Psycho-linguistics. Harcourt College. p. 331. ISBN 978-0155041066.
  24. ^ Garrett; Fromkin, V.A.; Ratner, N.B (1998). The Garrett Model in Psycho-linguistics. Harcourt College. p. 331.
  25. ^ "Psycholinguistics/Models of Speech Production - Wikiversity". en.wikiversity.org. Retrieved 2015-11-16.
  26. ^ Dell, G.S. (1997). Analysis of Sentence Production. Vol. 9. pp. 133–177. doi:10.1016/S0079-7421(08)60270-4. ISBN 9780125433099.
  27. ^ a b Levelt, Willem J.M (1999). "Models of Word Production". Trends in Cognitive Sciences. 3 (6): 223–232. doi:10.1016/S1364-6613(99)01319-4. PMID 10354575. S2CID 7939521.
  28. ^ Rice, Keren (2011). Consonantal Place of Articulation. John Wiley & Sons Inc. pp. 519–549. ISBN 978-1-4443-3526-2.
  29. ^ a b c Harrison, Allen. E (2011). Speech Disorders : Causes, Treatments, and Social Effects. New York: Nova Science Publishers. ISBN 9781608762132.
  30. ^ Field, John (2004). Psycholinguistics The Key Concepts. Routledge. pp. 18–19. ISBN 978-0415258906.
  31. ^ Rebecca Hughes; Beatrice Szczepek Reed (2016). Teaching and Researching Speaking: Third Edition. Taylor & Francis. pp. 6+. ISBN 978-1-317-43299-9.
  32. ^ a b Redford, M. A. (2015). The handbook of speech production. Chichester, West Sussex; Malden, MA : John Wiley & Sons, Ltd, 2015.
  33. ^ a b Shaffer, D., Wood, E., & Willoughby, T. (2005). Developmental Psychology Childhood and Adolescence. (2nd Canadian Ed). Nelson.
  34. ^ Wolf, M. (2005). Proust and the Squid: The Story and Science of the Reading Brain. New York, NY: Harper.
