Sonority hierarchy

From Wikipedia, the free encyclopedia

A sonority hierarchy or sonority scale is a hierarchical ranking of speech sounds (or phones). Sonority is loosely defined as the loudness of speech sounds relative to other sounds of the same pitch, length and stress;[1] sonority rankings are therefore often related to the amplitude of phones.[2] For example, pronouncing the vowel [a] produces a louder sound than the stop [t], so [a] ranks higher in the hierarchy. However, grounding sonority in amplitude is not universally accepted.[2] Many researchers instead define sonority as the resonance of speech sounds,[2] that is, the degree to which producing a phone sets air particles vibrating. On this view, more sonorous sounds are less subject to masking by ambient noise.[2]

Sonority hierarchies are especially important in the analysis of syllable structure: rules about which segments may appear together in onsets or codas, such as the sonority sequencing principle (SSP), are formulated in terms of differences in sonority value. Some languages also have assimilation rules based on the sonority hierarchy, such as the Finnish potential mood, in which a less sonorous segment changes to copy a more sonorous adjacent segment (e.g. -tne- → -nne-).

Sonority hierarchy[edit]

Sonority hierarchies vary somewhat in which sounds are grouped together. The one below is fairly typical:

                    vowels   approximants    nasals   fricatives   affricates   stops
                           (glides and liquids)
  syllabic            +          −             −          −            −          −
  approximant         +          +             −          −            −          −
  sonorant            +          +             +          −            −          −
  continuant          +          +             +          +            −          −
  delayed release     +          +             +          +            +          −

Sound types on the left side of the scale are the most sonorous, becoming progressively less sonorous towards the right (e.g., fricatives are less sonorous than nasals).

The labels on the left refer to distinctive features, and categories of sounds can be grouped together according to whether they share a feature. For instance, as shown in the sonority hierarchy above, vowels are considered [+syllabic], whereas all consonants (including stops, affricates, fricatives, etc.) are considered [−syllabic]. All sound categories falling under [+sonorant] are sonorants, whereas those falling under [−sonorant] are obstruents. In this way, any contiguous set of sound types may be grouped together on the basis of no more than two features (for instance, glides, liquids, and nasals are [−syllabic, +sonorant]).

Sonority scale[edit]

Most sonorous (weakest consonantality) to least sonorous (strongest consonantality), with English examples:

  low vowels (open vowels): /a ə/
  mid vowels: /e o/
  high vowels (close vowels) and glides (semivowels): /i u/ (close vowels), /j w/ (semivowels)
  flaps: [ɾ]
  laterals: /l/
  nasals: /m n ŋ/
  voiced fricatives: /v ð z/
  voiceless fricatives: /f θ s/
  voiced plosives: /b d g/
  voiceless plosives: /p t k/


In English, the sonority scale, from highest to lowest, is the following: /a/ > /e o/ > /i u j w/ > /l/ > /m n ŋ/ > /z v ð/ > /f θ s/ > /b d ɡ/ > /p t k/.[5][6]
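The ordering of the scale can be made concrete with a small numeric encoding. The ranks below are arbitrary illustrative values (only their relative order matters); this is a sketch, not an established formalisation:

```python
# Hypothetical numeric encoding of the English sonority scale above.
# Only the relative order of the values is meaningful.
SONORITY = {
    'a': 9,                          # low vowels
    'e': 8, 'o': 8,                  # mid vowels
    'i': 7, 'u': 7, 'j': 7, 'w': 7,  # high vowels / glides
    'l': 6,                          # laterals
    'm': 5, 'n': 5, 'ŋ': 5,          # nasals
    'z': 4, 'v': 4, 'ð': 4,          # voiced fricatives
    'f': 3, 'θ': 3, 's': 3,          # voiceless fricatives
    'b': 2, 'd': 2, 'g': 2,          # voiced plosives
    'p': 1, 't': 1, 'k': 1,          # voiceless plosives
}

def sonority(phone: str) -> int:
    """Return the (arbitrary) sonority rank of a phone."""
    return SONORITY[phone]
```

With this encoding, comparisons such as sonority('a') > sonority('t') directly mirror the scale.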

In simpler terms, members of the same group on the scale share the same sonority, and the scale runs from the greatest to the smallest presence of vocal fold vibration. Vowels involve the most vibration, while consonants are characterized in part by a reduction or interruption of vibration. At the top of the scale, open vowels use the most air for vibration; at the bottom, the least. This can be demonstrated by placing a few fingers on one's throat and pronouncing an open vowel such as [a], then one of the plosives (also known as stop consonants) of the [p t k] class. For vowels, the lungs and diaphragm generate a consistent level of pressure, and the difference between the pressure inside the body and outside the mouth is minimal. For plosives, the pressure generated by the lungs and diaphragm changes significantly, and the pressure difference between the inside of the body and the outside of the mouth is maximal just before release (no air is flowing, and the vocal folds are not resisting the airflow).

More finely nuanced hierarchies often exist within classes whose members cannot be said to be distinguished by relative sonority. In North American English, for example, within the set /p t k/, /t/ is by far the most subject to weakening before an unstressed vowel: in the usual American pronunciation, /t/ is a flap in later, but there is normally no weakening of /p/ in caper or of /k/ in faker.

In Portuguese, intervocalic /n/ and /l/ were typically lost historically (e.g. Latin LUNA > /lua/ 'moon', DONARE > /doar/ 'donate', COLORE > /kor/ 'color'), while /r/ remained (CERA > /sera/ 'wax'). Romanian, by contrast, transformed intervocalic non-geminate /l/ into /r/ (SOLEM > /so̯are/ 'sun') and reduced the geminate /ll/ to /l/ (OLLA > /o̯alə/ 'pot'), but left /n/ (LUNA > /lunə/ 'moon') and /r/ (PIRA > /parə/ 'pear') unchanged. Similarly, in Romance languages geminate /mm/ is often weaker than /nn/, and geminate /rr/ is often stronger than other geminates, including /pp tt kk/. In such cases, many phonologists refer not to sonority but to a more abstract notion of relative strength, which was once posited as universal in its arrangement but is now known to be language-specific.

Sonority in phonotactics[edit]

Syllable structure tends to be highly influenced and motivated by the sonority scale, with the general rule that more sonorous elements are internal (i.e., close to the syllable nucleus) and less sonorous elements are external. For instance, the sequence /plant/ is permissible in many languages, while /lpatn/ is much less likely. (This is the sonority sequencing principle.) The rule is applied with varying strictness across languages, and many allow exceptions: in English, for example, /s/ can occur external to stops even though it is more sonorous (e.g. strong, hats).
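As a sketch, the sonority sequencing principle can be checked mechanically against a numeric ranking. The RANK values below are arbitrary illustrative assumptions (only their order matters), and, as noted above, real languages tolerate exceptions such as English /s/ + stop:

```python
# Illustrative sonority ranks; only the relative order is meaningful.
RANK = {'a': 9, 'e': 8, 'o': 8, 'i': 7, 'u': 7, 'j': 7, 'w': 7,
        'l': 6, 'm': 5, 'n': 5, 'z': 4, 'f': 3, 's': 3,
        'b': 2, 'd': 2, 'g': 2, 'p': 1, 't': 1, 'k': 1}

def obeys_ssp(onset):
    """True if sonority rises strictly through an onset cluster,
    as the sonority sequencing principle predicts."""
    ranks = [RANK[p] for p in onset]
    return all(a < b for a, b in zip(ranks, ranks[1:]))
```

For example, obeys_ssp('pl') is True (stop before liquid, rising sonority), obeys_ssp('lp') is False, and obeys_ssp('st') is also False, which is exactly why English s + stop onsets count as exceptions to the principle.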

In many languages, the presence of two non-adjacent highly sonorous elements is a reliable indication of how many syllables are in a word: /ata/ is most likely two syllables, and many languages would deal with sequences like /mbe/ or /lpatn/ by pronouncing them as multiple syllables with syllabic sonorants: [m̩.be] and [l̩.pat.n̩].
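Because syllable nuclei line up with sonority peaks, counting local maxima of sonority gives a crude syllable estimate. A minimal sketch, again under an assumed illustrative ranking (it ignores plateaus of equal sonority and language-specific rules):

```python
# Illustrative sonority ranks; only the relative order is meaningful.
RANK = {'a': 9, 'e': 8, 'i': 7, 'u': 7, 'l': 6, 'm': 5, 'n': 5,
        'b': 2, 'p': 1, 't': 1}

def count_peaks(word):
    """Count local sonority maxima (a crude syllable estimate)."""
    ranks = [RANK[p] for p in word]
    peaks = 0
    for i, v in enumerate(ranks):
        left = ranks[i - 1] if i > 0 else -1
        right = ranks[i + 1] if i < len(ranks) - 1 else -1
        if v > left and v > right:
            peaks += 1
    return peaks
```

Here count_peaks('ata') is 2, matching the two-syllable intuition, and count_peaks('mbe') is also 2, mirroring the syllabic-nasal pronunciation [m̩.be].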

Ecological patterns in sonority[edit]

The sonority ranking of speech sounds plays an important role in the development of phonological patterns, which allow for the intelligible transmission of speech between individuals in a society. Numerous researchers have observed differences in the occurrence of particular sounds in languages around the world, and it has been suggested that these differences are a result of ecological pressures.

This understanding developed from the acoustic adaptation hypothesis, a theory initially used to explain differences in bird songs across varying habitats.[7] Researchers have since applied the theory as a basis for understanding why spoken languages around the world differ in their speech sounds.[8]


Maddieson and Coupé's[8] study of 633 languages worldwide found that some of the variation in the sonority of speech sounds across languages can be accounted for by differences in climate: languages in warmer climatic zones are more sonorous, while languages in cooler climatic zones favour the use of consonants. To explain these differences, they emphasise the influence of atmospheric absorption and turbulence in warmer ambient air, which may disrupt the integrity of acoustic signals; employing more sonorous sounds may therefore reduce the distortion of sound waves in warmer climates.

Fought and Munroe[9] instead argue that these disparities result from differences in daily activities across climates. They propose that, historically, individuals in warmer climates spent more time outdoors (likely engaging in agricultural work or social activities), so speech had to propagate effectively through the air over long distances, whereas in cooler climates people spent more time indoors and communicated over shorter distances.

Another explanation is that languages have adapted to maintain homeostasis.[10] Thermoregulation keeps body temperature within a certain range, allowing cells to function properly, and it has been argued that differences in the frequency of phones in a language are an adaptation that helps regulate internal body temperature. Producing a highly sonorous open vowel such as /a/ requires opening the vocal articulators, allowing air, and with it evaporating water, to flow out of the mouth, which lowers internal body temperature. In contrast, voiceless plosives such as /t/ are more common in cooler climates: producing them obstructs airflow out of the mouth through the constriction of the vocal articulators, reducing the transfer of heat out of the body, which is important for individuals residing in cooler climates.


A positive correlation exists, such that as temperature increases, so does the use of more sonorous speech sounds. However, dense vegetation cover reverses the correlation,[11] so that less sonorous speech sounds are favoured in warmer climates when the area is densely vegetated. This is said to be because, in warm climates with dense vegetation, individuals communicate over shorter distances and therefore favour speech sounds ranked lower in the sonority hierarchy.


Everett (2013)[12] suggested that in high-elevation regions such as the Andes, languages regularly employ ejective plosives. Everett argued that at high altitude, with reduced ambient air pressure, ejectives are easier to articulate. Moreover, because no pulmonic air flows out past the vocal folds during their production, water is conserved while communicating, reducing dehydration in individuals residing at high elevation.

A range of additional factors affecting the sonority of a particular language has also been observed, such as precipitation and sexual restrictiveness.[11] Inevitably, the patterns become more complex when a range of ecological factors is considered simultaneously. Moreover, a large amount of variation remains, which may be due to patterns of migration.

Mechanisms underlying differences in sonority[edit]

The existence of these differences in the speech sounds of modern human languages is said to be driven by cultural evolution.[13] Language is an important part of culture, and speech sounds on the sonority scale are more likely to be selected for in different environments because a language favours phonetic structures that allow messages to be transmitted successfully under the prevailing ecological conditions. Henrich highlights the role of dual inheritance, which propels changes in language that persist across generations: slight differences in language patterns may be selected for because they are advantageous for individuals in a given environment, and biased transmission then allows those speech patterns to be adopted by members of the society.[13]


References[edit]

  1. ^ Peter Ladefoged; Keith Johnson (1 January 2010). A Course in Phonetics. Cengage Learning. ISBN 978-1-4282-3126-9.
  2. ^ a b c d Ohala, John J. (1992). "Alternatives to the sonority hierarchy for explaining segmental sequential constraints" (PDF). Papers on the Parasession on the Syllable: 319–338.
  3. ^ "What is the sonority scale?". www-01.sil.org. Archived from the original on 2017-06-13. Retrieved 2016-11-21.
  4. ^ Burquest, Donald A., and David L. Payne. 1993. Phonological analysis: A functional approach. Dallas, TX: Summer Institute of Linguistics. pg 101
  5. ^ O'Grady, W. D.; Archibald, J. (2012). Contemporary linguistic analysis: An introduction (7th ed.). Toronto: Pearson Longman. p. 70.
  6. ^ "Consonants: Fricatives". facweb.furman.edu. Archived from the original on 2018-09-17. Retrieved 2016-11-28.
  7. ^ Boncoraglio, Giuseppe; Saino, Nicola (2007). "Habitat structure and the evolution of bird song: a meta-analysis of the evidence for the acoustic adaptation hypothesis". Functional Ecology. 21 (1). doi:10.1111/j.1365-2435.2006.01207.x. ISSN 0269-8463.
  8. ^ a b Maddieson, Ian (2018). "Language Adapts to Environment: Sonority and Temperature". Frontiers in Communication. 3. doi:10.3389/fcomm.2018.00028. ISSN 2297-900X.
  9. ^ Fought, John G.; Munroe, Robert L.; Fought, Carmen R.; Good, Erin M. (2016). "Sonority and Climate in a World Sample of Languages: Findings and Prospects". Cross-Cultural Research. 38 (1): 27–51. doi:10.1177/1069397103259439. ISSN 1069-3971. S2CID 144410953.
  10. ^ Evert Van de Vliert (22 December 2008). Climate, Affluence, and Culture. Cambridge University Press. pp. 5–. ISBN 978-1-139-47579-2.
  11. ^ a b Ember, Carol R.; Ember, Melvin (2007). "Climate, Econiche, and Sexuality: Influences on Sonority in Language". American Anthropologist. 109 (1): 180–185. doi:10.1525/aa.2007.109.1.180. ISSN 0002-7294.
  12. ^ Aronoff, Mark; Everett, Caleb (2013). "Evidence for Direct Geographic Influences on Linguistic Sounds: The Case of Ejectives". PLOS ONE. 8 (6): e65275. Bibcode:2013PLoSO...865275E. doi:10.1371/journal.pone.0065275. ISSN 1932-6203. PMC 3680446. PMID 23776463.
  13. ^ a b Joseph Henrich (17 October 2017). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton University Press. ISBN 978-0-691-17843-1.
