American Sign Language phonology

From Wikipedia, the free encyclopedia

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference in that sign-language phonemes are not based on sound and are spatial as well as temporal, they fulfill the same role as the phonemes of oral languages.

Three types of signs are distinguished: one-handed signs; symmetric two-handed signs, in which both hands are active and perform the same or a similar action; and asymmetric two-handed signs, in which one hand is active (the 'dominant' or 'strong' hand) and one hand is held static (the 'non-dominant' or 'weak' hand). The non-dominant hand in asymmetric signs often functions as the location of the sign. Almost all simple signs in ASL are monosyllabic.

Phonemes and features

Signs consist of units smaller than the sign itself. These are often subdivided into parameters: a handshape with a particular orientation, which may perform some type of movement in a particular location on the body or in the "signing space", together with non-manual signals. The latter may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often equated with spoken-language phonemes, although sign-language phonemes allow more simultaneity in their realization than phonemes in spoken languages. Phonemes in signed languages, as in oral languages, consist of features. For instance, the /B/ and /G/ handshapes are distinguished by the number of selected fingers: [all] versus [one].
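The parameter bundle described above can be sketched as a simple data structure. The field names and the sample values below are illustrative placeholders for the sketch, not attested ASL specifications; real feature models are far richer than the single [selected fingers] feature shown.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Handshape:
    """A handshape with one illustrative feature from the text above."""
    name: str
    selected_fingers: str  # feature value, e.g. "all" or "one"

@dataclass(frozen=True)
class Sign:
    """A sign as a bundle of parameters (non-manual signal optional)."""
    gloss: str
    handshape: Handshape
    orientation: str
    movement: str
    location: str
    nonmanual: str = ""

# The /B/ and /G/ handshapes contrast in a single feature value.
B = Handshape("B", selected_fingers="all")
G = Handshape("G", selected_fingers="one")

# A hypothetical sign built from placeholder parameter values.
sign = Sign(gloss="EXAMPLE", handshape=B, orientation="palm-down",
            movement="path:forward", location="neutral space")
```

The point of the sketch is that a sign is distinguished by the values of several simultaneous parameters, and that handshapes themselves decompose into features.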

Most phonological research focuses on the handshape. A problem in many studies of handshape is that elements of a manual alphabet are often borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory (Battison 1978). In addition, allophones are sometimes treated as separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or cheremes; Stokoe 1960). Later phonological models focus on handshape features rather than on handshapes as wholes (Liddell & Johnson 1989, Sandler 1989, Van der Hulst 1993, Brentari 1998, Van der Kooij 2002).

In some phonological models, movement is a phonological prime (Liddell & Johnson 1989, Perlmutter 1992, Brentari 1998). Other models treat movement as redundant, since it is predictable from the locations, hand orientations, and handshape features at the start and end of a sign (Van der Hulst 1993, Van der Kooij 2002). Models in which movement is a prime usually distinguish path movement (movement of the hand[s] through space) from internal movement (an opening or closing movement of the hand, a hand rotation, or finger wiggling).

Allophony and assimilation

Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from stretched at the side of the hand to folded in the palm of the hand. Allophony may be free, but is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded in the palm in signs where the radial side of the hand touches the body or the other hand.

Assimilation of sign phonemes to their context is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location of the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. It has also been observed that one-handed signs are articulated with two hands when followed by a two-handed sign.
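The location assimilation described above can be illustrated as a toy rewrite rule. The location names, the height scale, and the rule itself are assumptions of this sketch, not an analysis of ASL.

```python
# Toy height scale for places of articulation (illustrative values only).
LOCATION_HEIGHT = {"forehead": 4, "cheek": 3, "chin": 2, "chest": 1}

def assimilate_location(sign_loc: str, next_loc: str) -> str:
    """Toy rule: a high location (at or above the cheek) surfaces lower
    when the following sign's location is below the cheek."""
    if (LOCATION_HEIGHT[sign_loc] >= LOCATION_HEIGHT["cheek"]
            and LOCATION_HEIGHT[next_loc] < LOCATION_HEIGHT["cheek"]):
        return "cheek"  # articulated at a lower place than its citation form
    return sign_loc

# THINK (forehead) before a chest-located sign lowers; before a
# cheek-located sign it keeps its citation-form location.
lowered = assimilate_location("forehead", "chest")
unchanged = assimilate_location("forehead", "cheek")
```

This is regressive (anticipatory) assimilation: a property of the following sign affects the realization of the current one.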

Phonotactics

As yet, little is known about ASL phonotactic constraints (or those of other signed languages). The Symmetry and Dominance Conditions (Battison 1978) are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign moves if the hands do not have the same handshape specifications, and that the non-dominant hand has an unmarked handshape. However, since these conditions have been found to apply in more and more signed languages as cross-linguistic research grows, it is doubtful whether they should be considered part of ASL phonotactics specifically.
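The two conditions can be sketched as a predicate over a simplified two-handed sign representation. The field names are illustrative assumptions; the unmarked-handshape set follows Battison's commonly cited list (B, A, S, C, O, 1, 5). This is a sketch, not a full formalization: mirrored configurations, for instance, are not modeled.

```python
# Battison's commonly cited unmarked handshapes (assumption of this sketch).
UNMARKED_HANDSHAPES = {"B", "A", "S", "C", "O", "1", "5"}

def satisfies_battison_conditions(strong: dict, weak: dict) -> bool:
    """Check a simplified version of the Symmetry and Dominance Conditions
    for a two-handed sign, given dicts describing each hand."""
    if strong["moves"] and weak["moves"]:
        # Symmetry Condition: if both hands move, they must share
        # handshape, orientation, and movement (mirroring not modeled).
        return (strong["handshape"] == weak["handshape"]
                and strong["orientation"] == weak["orientation"]
                and strong["movement"] == weak["movement"])
    if strong["handshape"] != weak["handshape"]:
        # Dominance Condition: with different handshapes, only the strong
        # hand moves and the weak hand bears an unmarked handshape.
        return (strong["moves"] and not weak["moves"]
                and weak["handshape"] in UNMARKED_HANDSHAPES)
    return True
```

A candidate sign in which both hands move with different movements, or in which a static weak hand bears a marked handshape, would be rejected by the predicate, mirroring the conditions' role as constraints on possible signs.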

Suprasegmentals

Like most signed languages, ASL has analogues to speaking loudly and whispering in oral language. To vary the "volume", the signer enlarges or shrinks the signing space. "Loud" signs are larger and more separated, sometimes even with one-handed signs being produced with both hands. "Whispered" signs are smaller and off-center, and are sometimes (partially) blocked from the sight of unintended onlookers by the signer's body or a piece of clothing. In fast signing, particularly in connected discourse, sign movements are smaller and there may be less repetition. Signs occurring at the end of a phrase may show repetition or may be held ("phrase-final lengthening").

References

  • Battison, R. (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
  • Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
  • Hulst, H. van der (1993). Units in the analysis of signs. Phonology 10, 209–241.
  • Liddell, S. K. & Johnson, R. E. (1989). American Sign Language: The phonological base. Sign Language Studies 64, 197–277.
  • Perlmutter, D. (1992). Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23, 407–442.
  • Sandler, W. (1989). Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
  • Stokoe, W. (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. (1993 reprint ed.). Silver Spring, MD: Linstok Press.
  • Van der Kooij, E. (2002). Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic Implementation and Iconicity. PhD thesis, Universiteit Leiden.