Paralanguage

From Wikipedia, the free encyclopedia

Paralanguage is a component of meta-communication that may modify or nuance meaning, or convey emotion, through elements such as prosody, pitch, volume, and intonation. It is sometimes defined as relating to nonphonemic properties only. Paralanguage may be expressed consciously or unconsciously. The study of paralanguage is known as paralinguistics and was invented by George L. Trager in the 1950s, while he was working at the Foreign Service Institute of the Department of State. His colleagues at the time included Henry Lee Smith, Charles F. Hockett (who worked with him on using descriptive linguistics as a model for paralanguage), Edward T. Hall, who was developing proxemics, and Ray Birdwhistell, who was developing kinesics.[1] Trager published his conclusions in 1958,[2] 1960[3] and 1961.[4] His work has served as a basis for later research, especially studies investigating the relationship between paralanguage and culture (since paralanguage is learned, it differs by language and culture). A good example is the work of John J. Gumperz on language and social identity, which specifically describes paralinguistic differences between participants in intercultural interactions.[5] The film Gumperz made for the BBC in 1982, Multiracial Britain: Crosstalk, demonstrates cultural differences in paralanguage and the impact these have on relationships.

Paralinguistic information, because it is phenomenal, belongs to the external speech signal (Ferdinand de Saussure's parole) but not to the arbitrary conventional code of language (Saussure's langue).

The paralinguistic properties of speech play an important role in human communication. There are no utterances or speech signals that lack paralinguistic properties, since speech requires the presence of a voice that can be modulated. This voice must have some properties, and all the properties of a voice as such are paralinguistic. However, the distinction between linguistic and paralinguistic properties applies not only to speech but to writing and sign language as well, and it is not bound to any sensory modality. Even vocal language has some paralinguistic as well as linguistic properties that can be seen (lip reading, the McGurk effect), and even felt, e.g. by the Tadoma method.

Aspects of the speech signal

Perspectival aspects
Speech signals arrive at a listener’s ears with acoustic properties that may allow listeners to identify the location of the speaker (sensing distance and direction, for example). Sound localization functions in a similar way for non-speech sounds as well. The perspectival aspects of lip reading are more obvious and have more drastic effects when head turning is involved.
Organic aspects
The speech organs of different speakers differ in size. As children grow up, their organs of speech become larger and there are differences between male and female adults. The differences concern not only size, but also proportions. They affect the pitch of the voice and to a substantial extent also the formant frequencies, which characterize the different speech sounds. The organic quality of speech has a communicative function in a restricted sense, since it is merely informative about the speaker. It will be expressed independently of the speaker’s intention.
Expressive aspects
Paralinguistic cues such as loudness, rate, pitch, pitch contour, and to some extent the formant frequencies of an utterance contribute to its emotive or attitudinal quality. Typically, attitudes are expressed intentionally and emotions without intention,[citation needed] but attempts to fake or to hide emotions are not unusual.[citation needed]

Consequently, paralinguistic cues relating to expression have a moderate effect on semantic marking. That is, a message may register as more or less coherent depending on its expressive presentation. For instance, the utterance "I drink a glass of wine every night before I go to sleep" is coherent when made by a speaker identified as an adult, but registers a small semantic anomaly when made by a speaker identified as a child.[6] This anomaly is significant enough to be measured through electroencephalography, as an N400. Individuals with disorders along the autism spectrum have a reduced sensitivity to this and similar effects.[7]

Emotional tone of voice, itself paralinguistic information, has been shown to affect the resolution of lexical ambiguity. Some words have homophonous partners; some of these homophones appear to have an implicit emotive quality, for instance the sad "die" contrasted with the neutral "dye"; uttering the sound /dai/ in a sad tone of voice can result in a listener writing that word significantly more often than if the word is uttered in a neutral tone.[8]
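
As an illustration of how such expressive cues can be quantified, the following minimal Python sketch estimates two of the acoustic properties mentioned above: loudness, as root-mean-square amplitude, and pitch, via a simple autocorrelation search. It is not drawn from the studies cited here; the frame length, sampling rate, and assumed voice range are illustrative choices.

# Minimal, illustrative sketch: estimate loudness (RMS) and pitch (autocorrelation)
# from a short frame of a speech waveform. Frame length, sampling rate, and the
# plausible F0 range are assumptions chosen for this example.
import numpy as np

def rms_loudness(frame: np.ndarray) -> float:
    """Root-mean-square amplitude, a simple proxy for perceived loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def pitch_autocorrelation(frame: np.ndarray, sample_rate: int,
                          f0_min: float = 75.0, f0_max: float = 400.0) -> float:
    """Estimate the fundamental frequency (pitch) by locating the strongest
    autocorrelation peak within a plausible human voice range."""
    frame = frame - frame.mean()                       # remove DC offset
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / f0_max)                # shortest plausible period
    lag_max = int(sample_rate / f0_min)                # longest plausible period
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Example: a synthetic 150 Hz "voiced" 40 ms frame sampled at 16 kHz.
sr = 16000
t = np.arange(0, 0.04, 1 / sr)
frame = 0.3 * np.sin(2 * np.pi * 150 * t)
print(rms_loudness(frame))                             # ~0.21
print(pitch_autocorrelation(frame, sr))                # ~150 Hz

A fuller treatment would track such features over time (the pitch contour and rate mentioned above) rather than over a single frame; the sketch only shows the basic measurements.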

Linguistic aspects
Ordinary phonetic transcriptions of utterances reflect only the linguistically informative quality. The problem of how listeners factor out the linguistically informative quality from speech signals is a topic of current research.

Some of the linguistic features of speech, in particular of its prosody, are paralinguistic or pre-linguistic in origin. One of the most fundamental and widespread phenomena of this kind is what John Ohala describes as the "frequency code".[9] This code works even in communication across species. It has its origin in the fact that the acoustic frequencies in the voice of small vocalizers are high, while they are low in the voice of large vocalizers. This gives rise to secondary meanings such as 'harmless', 'submissive', and 'unassertive', which are naturally associated with smallness, while meanings such as 'dangerous', 'dominant', and 'assertive' are associated with largeness. In most languages, the frequency code also serves the purpose of distinguishing questions from statements. It is universally reflected in expressive variation, and it is reasonable to assume that it has phylogenetically given rise to the sexual dimorphism that lies behind the large difference in pitch between average female and male adults.
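
The question-versus-statement use of pitch mentioned above can be illustrated with a deliberately simplified sketch that compares average fundamental frequency over the final portion of an utterance with the portion before it. This is a toy illustration under stated assumptions, not Ohala's analysis; the frame-level F0 values, the function name, and the 30% split point are hypothetical.

def question_like(f0_track: list[float], tail_fraction: float = 0.3) -> bool:
    """Return True if pitch rises toward the end of the utterance
    (question-like), False if it falls or stays level (statement-like)."""
    split = int(len(f0_track) * (1 - tail_fraction))
    body, tail = f0_track[:split], f0_track[split:]
    return sum(tail) / len(tail) > sum(body) / len(body)

print(question_like([180, 175, 172, 170, 190, 210]))   # True: final rise
print(question_like([210, 200, 190, 180, 170, 160]))   # False: final fall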

In text-only communication such as email, chatrooms and instant messaging, paralinguistic elements can be displayed by emoticons, font and color choices, capitalization and the use of non-alphabetic or abstract characters. Nonetheless, paralanguage in written communication is limited in comparison with face-to-face conversation, sometimes leading to misunderstandings.
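
As a rough illustration, the sketch below scans a chat message for some of these text-based substitutes for paralanguage: emoticons, all-caps "shouting", starred actions such as *sigh*, and stretched letters. The regular expressions, category names, and example message are illustrative assumptions, not an established inventory.

import re

CUE_PATTERNS = {
    "emoticon": re.compile(r"[:;=8][\-^']?[)(DPpO3]"),   # e.g. :) ;-) :D
    "shouting": re.compile(r"\b[A-Z]{2,}\b"),            # all-caps words
    "starred_action": re.compile(r"\*[a-z]+\*"),         # e.g. *sigh*
    "stretched_letters": re.compile(r"(\w)\1{2,}"),      # e.g. "sooooo"
}

def find_paralinguistic_cues(message: str) -> dict[str, list[str]]:
    """Return each cue category together with the substrings that matched it."""
    return {name: pattern.findall(message)
            for name, pattern in CUE_PATTERNS.items()
            if pattern.findall(message)}

print(find_paralinguistic_cues("WOW that took sooooo long *sigh* :)"))
# {'emoticon': [':)'], 'shouting': ['WOW'],
#  'starred_action': ['*sigh*'], 'stretched_letters': ['o']}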

Specific forms of paralinguistic respiration

Gasps

A gasp is a kind of paralinguistic respiration in the form of a sudden and sharp inhalation of air through the mouth. A gasp may indicate difficulty breathing, and a panicked effort to draw air into the lungs. Gasps also occur from an emotion of surprise, shock or disgust. Like a sigh, a yawn, or a moan, a gasp is often an automatic and unintentional act.[10] Gasping is closely related to sighing, and the inhalation characterizing a gasp induced by shock or surprise may be released as a sigh if the event causing the initial emotional reaction is determined to be less shocking or surprising than the observer first believed.[11]

As a symptom of physiological problems, apneustic respirations (a.k.a. apneusis) are gasps related to brain damage associated with a stroke or other trauma.

Sighs

A sigh is a kind of paralinguistic respiration in the form of a deep and especially audible single exhalation of air through the mouth or nose that humans use to communicate emotion. It is a voiced pharyngeal fricative, sometimes associated with a guttural, glottal breath exuded in a low tone. It often arises from a negative emotion, such as dismay, dissatisfaction, boredom, or futility.[10] A sigh can also arise from positive emotions such as relief,[12] particularly in response to some negative situation ending or being avoided. Like a gasp, a yawn, or a moan, a sigh is often an automatic and unintentional act.[10] In literature, a sigh is often used to signify that the person producing it is lovelorn.

Scientific studies show that babies sigh after 50 to 100 breaths. This serves to improve the mechanical properties of lung tissue, and it also helps babies to develop a regular breathing rhythm. Behaviors equivalent to sighing have also been observed in animals such as dogs, monkeys, and horses.

In text messages and internet chat rooms, or in comic books, a sigh is usually represented with the word itself, 'sigh', possibly within asterisks, *sigh*.

Physiology of paralinguistic comprehension

fMRI studies
Several studies have used the fMRI paradigm to observe brain states brought about by adjustments of paralinguistic information. One such study investigated the effect of interjections that differed along the criteria of lexical index (more or less "wordy") as well as neutral or emotional pronunciation; a higher hemodynamic response in auditory cortical gyri was found when more robust paralinguistic data was available. Some activation was found in lower brain structures such as the pons, perhaps indicating an emotional response.[13]

References

  1. ^ Leeds-Hurwitz, W. (1990). Notes in the history of intercultural communication: The Foreign Service Institute and the mandate for intercultural training. Quarterly Journal of Speech, 76, 262-281.
  2. ^ Trager, G. L. (1958). Paralanguage: A first approximation. Studies in Linguistics, 13, 1-12.
  3. ^ Trager, G. L. (1960). Taos III: Paralanguage. Anthropological Linguistics, 2, 24-30.
  4. ^ Trager, G. L. (1961). The typology of paralanguage. Anthropological Linguistics, 3 (1), 17–21.
  5. ^ Gumperz, J. J. (1982). Discourse strategies. Cambridge: Cambridge University Press.
  6. ^ Van Berkum, J.J., Van den Brink, D., Tesink, C.M., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20, 580–591.
  7. ^ Groen, W.B., Tesink, C., Petersson, K.M., Van Berkum, J., Van der Gaag, R.J., Hagoort, P. and Buitelaar, J.K. (2010). Semantic, factual, and social language comprehension in adolescents with autism: an fMRI study. Cerebral Cortex, 20(8), 1937-1945.
  8. ^ Nygaard, L.C., Lunders, E.R. (2002). Resolution of lexical ambiguity by emotional tone of voice. Memory & Cognition, 30(4), 583-593.
  9. ^ Ohala, J. J. (1984) An ethological perspective on common cross-language utilization of F0 of voice. Phonetica, 41, 1-16.
  10. ^ a b c Rachel Broncher, A labor of love: a complete guide to childbirth for the mind, body, and soul (2004), p. 145.
  11. ^ Fernando Poyatos, Paralanguage: a linguistic and interdisciplinary approach to interactive speech and sounds (1993), p. 330.
  12. ^ Paul Ekman, Emotions revealed: recognizing faces and feelings to improve communication (2007), p. 193.
  13. ^ Dietrich, S., Hertrich, I., Kai, A., Ischebeck, A., Ackermann, H. (2008). Understanding the emotional expression of verbal interjections: a functional MRI study. Brain Imaging, 19(18), 1751-1755.

Further reading

  • Cook, Guy (2001) The Discourse of Advertising. (second edition) London: Routledge. (chapter 4 on paralanguage and semiotics)
  • Robbins, S. and Langton, N. (2001) Organizational Behaviour: Concepts, Controversies, Applications (2nd Canadian ed.). Upper Saddle River, NJ: Prentice-Hall
  • Traunmüller, H. (2005) "Paralinguale Phänomene" (Paralinguistic phenomena), chapter 76 in: Sociolinguistics: An International Handbook of the Science of Language and Society, 2nd ed., U. Ammon, N. Dittmar, K. Mattheier, P. Trudgill (eds.), Vol. 1, pp. 653–665. Walter de Gruyter, Berlin/New York.
  • Matthew McKay, Martha Davis, Patrick Fanning [1983] (1995) Messages: The Communication Skills Book, Second Edition, New Harbinger Publications, ISBN 1-57224-592-1, ISBN 978-1-57224-592-1, pp. 63–67.