
Major levels of linguistic structure. Phonology is shown encompassed by morphology and encompassing phonetics.

Phonology is the branch of linguistics that studies the systematic organization of the units of language that do not have any meaning in and of themselves. For spoken languages, such units are phones, tones, features, or larger units such as syllables and other prosodic domains.[1] For sign languages, phonology investigates the constituent parts of signs. These are specifications for movement, location, and handshape.[2][3]


Etymology and definition

The word phonology comes from Ancient Greek φωνή, phōnḗ, 'voice, sound', and the suffix -logy (which is from Greek λόγος, lógos, 'word, speech, subject of discussion'). It refers to one of the fundamental systems that a language is considered to comprise, like its syntax, its morphology and its lexicon. The term can also refer to the sound or sign system of a particular language variety, e.g. "the phonology of English".[4]

Phonology is usually distinguished from phonetics, which concerns the physical production, acoustic transmission and perception of language.[5][6] In general, the object of study in phonetics is something that can be measured, such as tongue posture, frequencies within an acoustic signal, or the auditory/visual processing time of a language stimulus. Phonology, on the other hand, typically describes the categorical properties of linguistic units such as speech sounds. For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, although in some theories establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. The distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology overlap with phonetics in descriptive disciplines such as psycholinguistics and speech perception, which has resulted in specific areas like articulatory phonology and laboratory phonology.

Definitions of the field of phonology vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Ferdinand de Saussure's distinction between langue and parole).[7] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, and in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items."[5] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying that use.[8] In these definitions of phonology, the apparent exclusion of sign languages and suprasegmental units (e.g. tone) reflects the relatively small amount of attention that has been given to data other than speech sounds. Despite this attentional bias, all modern linguists consider sign languages and tone to be within the domain of phonological study.

History

The earliest evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular, the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of Sanskrit, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics. Another notable scholar in pre-modern times was Ibn Jinni of Mosul. He was a pioneer in phonology and wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif, Kitāb Al-Muḥtasab, and Kitāb Al-Khaṣāʾiṣ [ar].[9]

The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay,[10]: 17  who (together with his students Mikołaj Kruszewski and Lev Shcherba in the Kazan School) shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris,[11] Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut.[12] Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology) and may have had an influence on the work of Saussure, according to E. F. K. Koerner.[13]

Nikolai Trubetzkoy, 1920s

An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[7] published posthumously in 1939, is among the most important works in the field from that period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although the concept had also been recognized by Baudouin de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. Louis Hjelmslev's glossematics also contributed, with its focus on linguistic structure independent of phonetic realization or semantics.[10]: 175 

In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE),[14] the basis for generative phonology. In that view, phonology is part of Universal Grammar and essentially an ordered list of phonological rules that transform an underlying form into a surface form. The underlying representations are sequences of segments that have an internal structure consisting of distinctive features. The features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle.[15] Each feature in SPE encodes an aspect of articulation or perception, such as vowel height or nasality, and its presence or absence is expressed with the binary values + or -, e.g. [+nasal]. Like the other major constructs of SPE, features were assumed to be universal and innate. The ordered rules, then, made reference to the feature values. For example, a voicing assimilation rule targeting voiceless sounds would transform underlying [-voice] into surface [+voice]. In this way, the grammar "generated" the appropriate surface form by means of the rule set. Generative phonology thus led phonologists to focus on capturing phonological processes and derivations with features and rules, and on the ordering of those rules. Furthermore, the generativists folded morphophonology into phonology by allowing rules to apply only in certain contexts, which solved some analytical problems and created others. The generative approach also downplayed the importance of the syllable as a unit and emphasised the segment.
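
The derivational mechanics can be sketched in a few lines of code. The following toy grammar is an illustration of the rule-based architecture, not an excerpt from SPE: segments are bundles of binary features, and an ordered list of rules rewrites feature values, here implementing the voicing-assimilation example from the text (simplified so that any segment before a voiced segment becomes voiced).

```python
# A minimal sketch of SPE-style derivation: segments are feature bundles,
# and ordered rules rewrite feature values in specified contexts.
# The inventory, features, and rules are simplified, hypothetical examples.

SEGMENTS = {
    "t": {"voice": False, "nasal": False},
    "d": {"voice": True,  "nasal": False},
    "n": {"voice": True,  "nasal": True},
    "a": {"voice": True,  "nasal": False, "vowel": True},
}

def spell_out(features):
    """Map a feature bundle back to a symbol (inverse of SEGMENTS)."""
    for symbol, spec in SEGMENTS.items():
        if spec == features:
            return symbol
    return "?"

def voicing_assimilation(form):
    """[-voice] -> [+voice] immediately before a [+voice] segment."""
    out = [dict(seg) for seg in form]   # copy: a rule derives a new form
    for i in range(len(out) - 1):
        if not out[i]["voice"] and out[i + 1]["voice"]:
            out[i]["voice"] = True
    return out

# An ordered list of rules: the output of each rule feeds the next one.
RULES = [voicing_assimilation]

def derive(underlying):
    form = [dict(SEGMENTS[s]) for s in underlying]
    for rule in RULES:
        form = rule(form)
    return "".join(spell_out(seg) for seg in form)

print(derive("ta"))   # underlying /ta/ surfaces as "da" in this toy grammar
```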

In 1976, John Goldsmith introduced autosegmental phonology.[16] Based on the phonological behaviour of tones, Goldsmith proposed that tones are not bound to the internal structure of segments, but rather that they are autonomous from segments (hence the theory's name) and exist on a separate tier: the tonal tier. In the representation, tones are then connected to segments via association lines. In this way, it is easy to represent one-to-many and many-to-one associations between tones on the tonal tier and segments on the segmental tier. This is useful for representing, among other things, tonal spreading and (derived) contour tones. The concept of autonomous tiers was subsequently also applied to features. Until then, sequences of segments had been conceptualised as existing on a single linear string, but with autosegmental phonology, features could be represented on multiple tiers, separate from the positional slots with which they can associate. A significant consequence of this is that certain processes that appear non-local on a single string can be represented as local. An example is vowel harmony: from a strictly linear perspective, the vowels in a CVCV sequence are not adjacent, but they are adjacent in a representation where the vowel features exist on a tier that is separate from the consonant features. Eventually, autosegmental phonology led to feature geometries.[17] In feature geometries, features are organised in geometrical structures with major nodes, such as "Place of Articulation" and "Laryngeal", that each contain several features in their branches. Grouping features together into nodes crucially allows nodes to spread in their entirety rather than feature by feature. Additionally, nodes can dominate each other, allowing for complex geometries that make specific predictions about which phonological processes are possible.
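
The tier-based representation lends itself to a simple data-structure sketch. Below, tones and syllables live on separate tiers and association lines are pairs of indices; the data are hypothetical, and syllables are treated as unanalysed units for brevity.

```python
# A minimal sketch of an autosegmental representation: the tonal tier and
# the segmental tier are separate sequences, linked only by association
# lines (pairs of indices). Toy data for illustration.

tonal_tier = ["H", "L"]          # a high and a low tone
segmental_tier = ["ba", "du"]    # two syllables, treated as units here

# Association lines: (tone index, syllable index).
# One-to-many: H links to both syllables, i.e. tonal spreading.
spreading = [(0, 0), (0, 1)]

# Many-to-one: H and L both link to syllable 0, i.e. a falling contour.
contour = [(0, 0), (1, 0)]

def realise(tones, syllables, lines):
    """List the tones pronounced on each syllable."""
    return {
        syllables[s]: [tones[t] for (t, s2) in lines if s2 == s]
        for s in range(len(syllables))
    }

print(realise(tonal_tier, segmental_tier, spreading))
# {'ba': ['H'], 'du': ['H']}   (spreading)
print(realise(tonal_tier, segmental_tier, contour))
# {'ba': ['H', 'L'], 'du': []} (a falling HL contour on one syllable)
```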

During the 1980s, two frameworks of phonology that have much in common developed independently: Dependency Phonology and Government Phonology. The impetus behind Dependency Phonology (DP), mostly developed by John Anderson, was the fundamental idea that a relation between linguistic units is asymmetrical, with a dominating component (the head) and a dominated component (the dependent).[18] Government Phonology (GP), whose prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris, developed out of research into the internal structure of segments in the autosegmental era, and took inspiration from Government and Binding syntax.[19]

Both DP and GP attempt to bridge the gap between syntax and phonology. For DP, this stems from Anderson's Structural Analogy Assumption, which proposes that it is desirable as a null hypothesis that the same mechanics operate in different parts of the grammar.[20] As such, DP uses notions such as complements and adjuncts. GP, on the other hand, being directly based on syntax, naturally employs several mechanisms that are analogous to syntactic operations, such as Proper Government and the Minimality Condition. A second similarity is that both frameworks reject syllabic constituents, DP through head-dependency relations and the rejection of contentless nodes, and GP through the assumption of lateral relations between segments in a string.[21] The result is that both frameworks reject, for example, an Onset node that can branch, so that their analyses of syllable structure are alike.

A third important similarity between DP and GP is the type of phonological primes that they use as subsegmental building blocks, pioneered by DP. These are most commonly known as "elements" (though they are called "components" in DP). The use of elements to represent subsegmental structure, as opposed to the more widely used distinctive features, is technically also possible outside of DP and GP and, in fact, Element Theory has developed into a self-contained theory.[22] Elements differ from features in a number of ways. First, they are monovalent (unary), i.e. either present or fully absent in a representation, such that phonological processes cannot make reference to the lack of an element. Second, elements have multiple phonetic realisations that depend on their headedness status (which is similar but not identical between DP and GP). Third, formal definitions of elements are based on acoustics rather than on articulation or perception. Fourth, consonants and vowels are always represented by the same set of elements.

Despite the common ground in terms of structural analogies between syntax and phonology, syllable structure, and subsegmental structure, there is one major difference that sets DP and GP apart: the importance of phonetic grounding. DP is substance-based, meaning that any phonological entity must bear some relation to its phonetic realisation and cannot be "empty". In stark opposition to this, GP assumes that only phonological behaviour can be used as evidence to support hypotheses about phonological structure. Phonology and phonetics are seen as separate modules, which entails that phonological structure needs to be transduced or "translated" into phonetic structure. As such, the relationship between phonology and phonetics can, in principle, be as arbitrary and language-specific as the relationship between a phonological form and its lexical meaning. This means that phonological units such as elements have a very liberal phonetic implementation.
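
As a concrete illustration of the monovalent elements described above, the following minimal sketch encodes a toy vowel inventory in the style of common Element Theory expositions such as Backley (2011); the particular expressions and headedness conventions are assumptions for illustration, since they vary between authors.

```python
# A toy sketch of element-based vowel representations (loosely after common
# Element Theory expositions; exact expressions vary between authors).
# Elements are monovalent: a vowel either contains an element or it does
# not, so no process can refer to the absence of |A| the way binary
# features allow reference to [-voice].

# Each vowel is a (set of elements, head element) pair.
VOWELS = {
    "i": ({"I"}, "I"),
    "u": ({"U"}, "U"),
    "a": ({"A"}, "A"),
    "e": ({"A", "I"}, "I"),   # I-headed |A I|
    "ɛ": ({"A", "I"}, "A"),   # A-headed |A I|: same elements, other head
    "o": ({"A", "U"}, "U"),
}

# "e" and "ɛ" contain exactly the same elements and differ only in
# headedness, which drives their different phonetic realisations:
elements_e, head_e = VOWELS["e"]
elements_open_e, head_open_e = VOWELS["ɛ"]
print(elements_e == elements_open_e)   # True
print(head_e, head_open_e)             # I A
```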

In 1987, a small conference was held at the Ohio State University that would result in the launch of a new approach to doing phonological research: Laboratory Phonology. Laboratory Phonology is essentially the enterprise of addressing phonological questions through experimental work.[23] Throughout most of the 20th century, phonetics and phonology diverged as branches of linguistics. Phonetic mechanisms were assumed to be universal and gradient, whereas phonology was assumed to be language-specific (hence acquired) and categorical. Furthermore, phonology within generative linguistics was also predominantly substance-free. However, increasingly advanced technology facilitated phonetic research, and linguists came to understand that phonetics is also language-specific and that there is at least some gradience within phonology (e.g. incomplete neutralisation). Laboratory Phonology was thus intended to bring phonetics and phonology under one roof again. The fundamental questions it poses are how cognitive representations are mapped onto physical motoric functions, what the division of labour is between phonetics and phonology, and which methods are appropriate to study them. An example of research within Laboratory Phonology would be using electroencephalography in a perception experiment to make inferences about the featural specification or lack thereof in segments.

Concurrent with Laboratory Phonology as an approach to phonology came the inception of Articulatory Phonology,[24] which developed in the same historical context. But whereas Laboratory Phonology is a theory-neutral approach, Articulatory Phonology, developed by Catherine Browman and Louis Goldstein, was a novel theory about the internal structure of segments.[25] Instead of segments having primes (features or elements), there are only articulatory "gestures" such as "closed velum" or "protruded lips". Gestures are coordinated in a certain way so that they overlap or are crucially sequential, thereby creating the illusion of segments, which have no status in Articulatory Phonology. In visualisations called "gestural scores", the phonological specification of gestures is shown over time as bars along a horizontal axis. Importantly, a gestural specification denotes an abstract articulatory goal; it is not itself a motoric event, nor does the goal need to be attained at all times. A gestural representation of speech leads to unique analyses in which, for example, assimilation can be directly modelled as gestural overlap, and it can straightforwardly explain certain alternations that make no sense from the perspective of segment-internal phonological primes. This reduces the complexity of the (morpho)phonology. Furthermore, given the importance accorded to gestures, Articulatory Phonology has focussed strongly on the temporal organisation and coordination of the movements of the articulators. Results in this line of research have interesting implications for syllable structure in particular.
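
The notion of a gestural score can be made concrete with a small sketch. The representation below is a toy illustration, not Browman and Goldstein's formal task-dynamic model: gestures are labelled activation intervals, and temporal overlap between them stands in for assimilation-like effects.

```python
# A minimal sketch of a gestural score: each gesture is an abstract
# articulatory goal active over a time interval, and overlap between
# gestures (rather than feature change) models assimilation-like effects.
# All values are toy examples.

from dataclasses import dataclass

@dataclass
class Gesture:
    goal: str      # abstract articulatory goal, e.g. "lip closure"
    start: float   # activation onset (arbitrary time units)
    end: float     # activation offset

# A toy score for something like [mb]: velum lowering overlaps the first
# part of a lip closure, so only that part is perceived as nasal.
score = [
    Gesture("lip closure", 0.0, 2.0),
    Gesture("velum lowering", 0.0, 1.0),
    Gesture("glottal voicing", 0.0, 2.0),
]

def overlaps(g1, g2):
    """Two gestures overlap if their activation intervals intersect."""
    return g1.start < g2.end and g2.start < g1.end

for g in score[1:]:
    print(score[0].goal, "overlaps", g.goal, ":", overlaps(score[0], g))
```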

In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory (OT), an architecture for the computation of phonology that is couched within a more general theory of the relationship between brain and mind.[26] In stark contrast to traditional generative phonology and its ordered rules, which had been the dominant view of phonology up until the 1990s, OT proposed that phonology changes an underlying form (the "input") by selecting an optimal pronunciation for it (the "output"). Which pronunciation is optimal is determined by evaluating how badly each of the theoretically infinite possible pronunciations (the "candidates") violates a set of constraints. Crucially, each of the constraints is in principle violable, but any given constraint is more important than the combination of all lower-ranked constraints, so that the optimal candidate is the one that best satisfies the highest-ranked constraint on which the candidates differ, regardless of its violations of lower-ranked constraints. Classic OT executes this evaluation process in parallel, meaning that the computation cannot evaluate intermediate steps in the derivation. This is diametrically opposed to the step-by-step derivations of generative phonology, where the output of one rule can be the input of a rule that is ordered after it. However, there exist versions of OT that involve serial processing, i.e. multiple consecutive evaluations.[27][28] Constraints were originally asserted to be universal, so that the only difference between languages was the ranking of the constraints, which made the theory compatible with Universal Grammar. The OT approach was soon extended to morphology by John McCarthy and Alan Prince and has become a dominant trend in phonology.
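
The evaluation logic itself is easy to sketch: rank the constraints, give each candidate a tuple of violation counts ordered by rank, and compare the tuples lexicographically, so that any higher-ranked constraint strictly outweighs all lower-ranked ones combined. The sketch below uses toy versions of two familiar constraint types, a faithfulness constraint (MAX-IO, "do not delete") and a markedness constraint (*CODA, "no codas"), with a hypothetical input.

```python
# A minimal sketch of OT evaluation: constraints are ranked, each candidate
# gets a tuple of violation counts ordered by rank, and lexicographic
# comparison of those tuples makes any higher-ranked constraint strictly
# more important than all lower-ranked ones combined. Toy constraints.

def no_coda(candidate):
    """*CODA, toy version: one violation for a word-final consonant."""
    return 1 if candidate and candidate[-1] not in "aeiou" else 0

def max_io(candidate, underlying="pat"):
    """MAX-IO, toy version: one violation per deleted underlying segment,
    approximated here by the length difference."""
    return max(0, len(underlying) - len(candidate))

RANKING = [max_io, no_coda]   # MAX-IO outranks *CODA in this toy grammar

def evaluate(candidates):
    profile = lambda c: tuple(con(c) for con in RANKING)
    return min(candidates, key=profile)   # lexicographic tuple comparison

print(evaluate(["pat", "pa", "p"]))
# 'pat': deleting the coda satisfies *CODA but fatally violates MAX-IO
```

Reversing the ranking to [no_coda, max_io] makes "pa" optimal instead, modelling a language that resolves codas by deletion; in OT, such re-rankings are the only locus of cross-linguistic variation.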

Computational Phonology

In recent years, Evolutionary Phonology has initiated an integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns.[29]

Topics

Phonemes

One of the core tasks of the phonologist is to create an analysis of the phonemic inventory of a language. This is sometimes the first step in a phonological analysis, because phonemes are the building blocks of syllables, have features as their own building blocks, undergo phonological processes, and are the carriers of all suprasegmental properties of speech such as stress. A simple diagnostic for establishing phonemehood is to find words that differ in meaning and phonetically differ in only one speech sound, i.e. minimal pairs. When a minimal pair is found, it is proof that the different speech sounds belong to different phonemes. Speech sounds for which no minimal pairs can be found may be allophones of the same phoneme. In this case, one might observe that the allophones are in complementary distribution, meaning that one can only appear in phonological contexts where the other cannot, and vice versa. The allophone that appears in the most diverse set of environments is then taken to be the unconditioned allophone.
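
Because the diagnostic is mechanical, it can be automated over a transcribed word list. The following is a minimal sketch; the word list is hypothetical, and the code assumes one symbol per segment, which real transcriptions would need proper segmentation to guarantee.

```python
# A minimal sketch of minimal-pair detection: two transcribed words form a
# minimal pair if they have equal length and differ in exactly one segment.
# Assumes one symbol per segment. Toy English-like word list.

from itertools import combinations

words = ["pat", "bat", "pit", "pad", "map"]

def differing_positions(w1, w2):
    return [i for i, (a, b) in enumerate(zip(w1, w2)) if a != b]

def minimal_pairs(word_list):
    pairs = []
    for w1, w2 in combinations(word_list, 2):
        if len(w1) == len(w2):
            diffs = differing_positions(w1, w2)
            if len(diffs) == 1:
                i = diffs[0]
                pairs.append((w1, w2, (w1[i], w2[i])))
    return pairs

for w1, w2, (a, b) in minimal_pairs(words):
    print(f"{w1} ~ {w2}: evidence that /{a}/ and /{b}/ contrast")
# pat ~ bat: evidence that /p/ and /b/ contrast, and so on
```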

To find minimal pairs or establish the lack thereof, a phonologist needs a data set of accurately transcribed words, or they could attempt to elicit minimal pairs from a native speaker. How straightforward it is to establish minimal pairs will depend on several language-specific factors. If a language has many phonological processes, relationships between the underlying form and surface form will be obscured, so that a thorough examination of these processes needs to precede a definitive phonemic analysis. Otherwise, what appears to be a minimal pair on the phonological surface can be mistaken for an underlying contrast.[30]

There are several additional analytical complications that may arise. First, the absence of a minimal pair does not always prove that two speech sounds must belong to the same phoneme. If speech sounds are in complementary distribution and thus have no minimal pairs, they are usually still considered different phonemes if they are phonetically very different. This is the case, for example, for /h/ and /ŋ/ in German.[31] But there are no criteria for how phonetically distinct two sounds have to be in order to count as different phonemes, which leaves controversial cases such as Standard Mandarin /i/, which is in complementary distribution with sounds that could be transcribed as [ɹ̺] and [ɻ].[32] It can also happen that two sounds are not in complementary distribution, have no minimal pairs to distinguish them, and yet are not interchangeable. This is the case for Dutch /ɣ/ (for speakers who have this sound): it does not contrast with its voiceless counterpart /x/, but both sounds occur word-initially and intervocalically without being predictable.[SOURCE] A second problem for phonemic analysis is that it is not always clear which allophone is conditioned and which one must be taken as underlying. [EXAMPLE]. A third problem is that loanwords may introduce new speech sounds or new phonotactic structures to the language, and there are no truly objective means to decide when loans must be accepted as part of the sound system.
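
Complementary distribution can be checked in the same mechanical way: collect the environments in which each sound occurs and test whether the two sets of environments intersect. The sketch below uses toy data loosely modelled on the German [h]/[ŋ] case mentioned above, with "N" standing in for [ŋ]; real analyses would of course use richer environments than single neighbouring symbols.

```python
# A minimal sketch of a complementary-distribution check: collect the
# (preceding, following) environments of two sounds across a word list and
# test whether the environment sets intersect. "#" marks a word boundary.
# Toy data loosely modelled on German [h] vs [ŋ] (written "N" here).

words = ["hand", "hut", "laNg", "riNg", "behalten"]

def environments(sound, word_list):
    envs = set()
    for w in word_list:
        padded = "#" + w + "#"
        for i, seg in enumerate(padded):
            if seg == sound:
                envs.add((padded[i - 1], padded[i + 1]))
    return envs

def complementary(s1, s2, word_list):
    """True if the two sounds never occur in the same environment."""
    return not (environments(s1, word_list) & environments(s2, word_list))

print(complementary("h", "N", words))   # True in this toy data set
```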

Despite the crucial role that phonemes have played since their conceptualisation, there is no complete consensus on whether phonemes are merely a convenient descriptive tool for linguists, or whether they are actual cognitive units that should have a place in formal theory. A framework in which phonemes are considered epiphenomenal is Articulatory Phonology,[33] where phonemes/segments are created through the implementation of gestures that are not contained within or associated to phonemes in the way that features are. It has also been claimed that, when the logic of Autosegmental Phonology is taken to its endpoint, segments are only anchoring points for features on a timing tier, so that there are no phonemes in the classical sense.[SOURCE] Neurocognitive research has likewise produced mixed results, with some studies[SOURCE] supporting the existence of phonemes whereas others find no evidence for them.[SOURCE]

Phonological and morphophonological processes

Features

Tone

Stress

Intonation

Syllable structure and phonotactics

Diachronic/historical phonology

Main approaches to representation

Phonology in sign languages

The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not speech-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sublexical units are not instantiated as speech sounds.

Theoretical frameworks in phonology

  • Autosegmental Phonology
  • Element Theory: an approach to subsegmental phonology that assumes that the building blocks of speech sounds and tones are acoustic elements.[34]
  • Exemplar Theory
  • Generative Phonology

See also

Notes

  1. ^ Peng, Long (2013). Analyzing Sound Patterns: An Introduction to Phonology. Cambridge: Cambridge University Press. ISBN 978-0-521-19579-9.
  2. ^ Brentari, Diane; Fenlon, Jordan; Cormier, Kearsy (July 2018). "Sign Language Phonology". Oxford Research Encyclopedia of Linguistics. doi:10.1093/acrefore/9780199384655.013.117. ISBN 9780199384655. S2CID 60752232.
  3. ^ Stokoe, William C. (1978) [1960]. Sign Language Structure: An outline of the visual communication systems of the American deaf. Department of Anthropology and Linguistics, University at Buffalo. Studies in linguistics, Occasional papers. Vol. 8 (2nd ed.). Silver Spring, MD: Linstok Press.
  4. ^ "Definition of PHONOLOGY". www.merriam-webster.com. Retrieved 3 January 2022.
  5. ^ a b Lass, Roger (1998). Phonology: An Introduction to Basic Concepts. Cambridge, UK; New York; Melbourne, Australia: Cambridge University Press. p. 1. ISBN 978-0-521-23728-4. Paperback ISBN 0-521-28183-0. Retrieved 8 January 2011.
  6. ^ Carr, Philip (2003). English Phonetics and Phonology: An Introduction. Massachusetts, USA; Oxford, UK; Victoria, Australia; Berlin, Germany: Blackwell Publishing. ISBN 978-0-631-19775-1. Paperback ISBN 0-631-19776-1. Retrieved 8 January 2011.
  7. ^ a b Trubetzkoy N., Grundzüge der Phonologie (published 1939), translated by C. Baltaxe as Principles of Phonology, University of California Press, 1969
  8. ^ Clark, John; Yallop, Colin; Fletcher, Janet (2007). An Introduction to Phonetics and Phonology (3rd ed.). Massachusetts, USA; Oxford, UK; Victoria, Australia: Blackwell Publishing. ISBN 978-1-4051-3083-7. Alternative ISBN 1-4051-3083-0. Retrieved 8 January 2011.
  9. ^ Bernards, Monique, "Ibn Jinnī", in: Encyclopaedia of Islam, THREE. Edited by: Kate Fleet, Gudrun Krämer, Denis Matringe, John Nawas, Everett Rowson. First published online 2021; first print edition ISBN 9789004435964, 2021. Consulted online on 27 May 2021.
  10. ^ a b Anderson, Stephen R. (2021). Phonology in the twentieth century (Second, revised and expanded ed.). Berlin: Language Science Press. doi:10.5281/zenodo.5509618. ISBN 978-3-96110-327-0. ISSN 2629-172X. Retrieved 28 December 2021.
  11. ^ Anon (probably Louis Havet). (1873) "Sur la nature des consonnes nasales". Revue critique d'histoire et de littérature 13, No. 23, p. 368.
  12. ^ Roman Jakobson, Selected Writings: Word and Language, Volume 2, Walter de Gruyter, 1971, p. 396.
  13. ^ E. F. K. Koerner, Ferdinand de Saussure: Origin and Development of His Linguistic Thought in Western Studies of Language. A contribution to the history and theory of linguistics, Braunschweig: Friedrich Vieweg & Sohn [Oxford & Elmsford, N.Y.: Pergamon Press], 1973.
  14. ^ Chomsky, Noam; Halle, Morris (1968). The Sound Pattern of English. New York: Harper & Row.
  15. ^ Jakobson, Roman; Fant, Gunnar; Halle, Morris (1952). Preliminaries to Speech Analysis. Cambridge, MA: MIT Press.
  16. ^ Goldsmith, John A. (1976). Autosegmental Phonology (PhD thesis). MIT.
  17. ^ Clements, George N. (1985). "The geometry of phonological features". Phonology Yearbook. 2: 225–252.
  18. ^ van der Hulst, Harry; van de Weijer, Jeroen (2018). "Dependency Phonology". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 325–359. ISBN 978-1-315-67542-8.
  19. ^ Scheer, Tobias; Kula, Nancy C. (2018). "Government Phonology: Element Theory, conceptual issues, and introduction". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 226–261. ISBN 978-1-315-67542-8.
  20. ^ Anderson, John M. (1987). "The tradition of structural analogy". In Steele, R.; Threadgold, T. (eds.). Language topics: Essays in honour of Michael Halliday. Amsterdam: John Benjamins. pp. 33–43.
  21. ^ Scheer, Tobias; Cyran, Eugeniusz (2018). "Syllable structure in Government Phonology". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 262–292. ISBN 978-1-315-67542-8.
  22. ^ Backley, Phillip (2011). An Introduction to Element Theory. Edinburgh University Press. ISBN 0748637427.
  23. ^ Cohn, Abigail C.; Fougeron, Cécile; Huffman, Marie K. (2018). "Laboratory phonology". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 504–529. ISBN 978-1-315-67542-8.
  24. ^ Browman, Catherine P.; Goldstein, Louis M. (1986). "Towards an articulatory phonology". Phonology. 3 (1): 219–252.
  25. ^ Hall, Nancy (2018). "Articulatory Phonology". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 530–552. ISBN 978-1-315-67542-8.
  26. ^ Smolensky, Paul; Legendre, Géraldine (2006). The Harmonic Mind. Cambridge, MA: MIT Press. ISBN 978-0-262-19526-3.
  27. ^ McCarthy, John J. (2010). "An Introduction to Harmonic Serialism". Language and Linguistics Compass. 4 (10): 1001–1018.
  28. ^ Bermúdez-Otero, Ricardo (2018). "Stratal Phonology". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory. Abingdon: Routledge. pp. 100–134. ISBN 113802581X.
  29. ^ Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
  30. ^ Snider, Keith (2014). "On Establishing Underlying Tonal Contrast". Language Documentation & Conservation. 8: 707–737.
  31. ^ Krämer, Martin (2012). Underlying Representations. Cambridge: Cambridge University Press. p. 18. ISBN 0521192773.
  32. ^ Duanmu, San (2007). The Phonology of Standard Chinese (2 ed.). Oxford University Press. ISBN 978-0-19-921578-2.
  33. ^ Browman, Catherine P.; Goldstein, Louis M. (1986). "Towards an articulatory phonology". Phonology. 3 (1): 219–252.
  34. ^ Backley, Phillip (2011). An Introduction to Element Theory. Edinburgh: Edinburgh University Press. ISBN 0748637435.

Cite error: A list-defined reference named "HaleReiss2008" is not used in the content (see the help page).

Cite error: A list-defined reference named "HaleReiss2000" is not used in the content (see the help page).

Bibliography

  • Anderson, John M.; and Ewen, Colin J. (1987). Principles of dependency phonology. Cambridge: Cambridge University Press.
  • Bloch, Bernard (1941). "Phonemic overlapping". American Speech. 16 (4): 278–284. doi:10.2307/486567. JSTOR 486567.
  • Bloomfield, Leonard. (1933). Language. New York: H. Holt and Company. (Revised version of Bloomfield's 1914 An introduction to the study of language).
  • Brentari, Diane (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
  • Chomsky, Noam. (1964). Current issues in linguistic theory. In J. A. Fodor and J. J. Katz (Eds.), The structure of language: Readings in the philosophy of language (pp. 91–112). Englewood Cliffs, NJ: Prentice-Hall.
  • Chomsky, Noam; and Halle, Morris. (1968). The sound pattern of English. New York: Harper & Row.
  • Clements, George N. (1985). "The geometry of phonological features". Phonology Yearbook. 2: 225–252. doi:10.1017/S0952675700000440. S2CID 62237665.
  • Clements, George N.; and Samuel J. Keyser. (1983). CV phonology: A generative theory of the syllable. Linguistic inquiry monographs (No. 9). Cambridge, MA: MIT Press. ISBN 0-262-53047-3 (pbk); ISBN 0-262-03098-5 (hbk).
  • de Lacy, Paul, ed. (2007). The Cambridge Handbook of Phonology. Cambridge University Press. ISBN 978-0-521-84879-4. Retrieved 8 January 2011.
  • Donegan, Patricia. (1985). On the Natural Phonology of Vowels. New York: Garland. ISBN 0-8240-5424-5.
  • Firth, J. R. (1948). "Sounds and prosodies". Transactions of the Philological Society. 47 (1): 127–152. doi:10.1111/j.1467-968X.1948.tb00556.x.
  • Gilbers, Dicky; de Hoop, Helen (1998). "Conflicting constraints: An introduction to optimality theory". Lingua. 104 (1–2): 1–12. doi:10.1016/S0024-3841(97)00021-1.
  • Goldsmith, John A. (1979). The aims of autosegmental phonology. In D. A. Dinnsen (Ed.), Current approaches to phonological theory (pp. 202–222). Bloomington: Indiana University Press.
  • Goldsmith, John A. (1989). Autosegmental and metrical phonology: A new synthesis. Oxford: Basil Blackwell.
  • Goldsmith, John A. (1995). "Phonological Theory". In John A. Goldsmith (ed.). The Handbook of Phonological Theory. Blackwell Handbooks in Linguistics. Blackwell Publishers. ISBN 978-1-4051-5768-1.
  • Gussenhoven, Carlos & Jacobs, Haike. "Understanding Phonology", Hodder & Arnold, 1998. 2nd edition 2005.
  • Hale, Mark; Reiss, Charles (2008). The Phonological Enterprise. Oxford, UK: Oxford University Press. ISBN 978-0-19-953397-8.
  • Halle, Morris (1954). "The strategy of phonemics". Word. 10 (2–3): 197–209. doi:10.1080/00437956.1954.11659523.
  • Halle, Morris. (1959). The sound pattern of Russian. The Hague: Mouton.
  • Harris, Zellig. (1951). Methods in structural linguistics. Chicago: Chicago University Press.
  • Hockett, Charles F. (1955). A manual of phonology. Indiana University publications in anthropology and linguistics, memoirs II. Baltimore: Waverley Press.
  • Hooper, Joan B. (1976). An introduction to natural generative phonology. New York: Academic Press. ISBN 9780123547507.
  • Jakobson, Roman (1949). "On the identification of phonemic entities". Travaux du Cercle Linguistique de Copenhague. 5: 205–213. doi:10.1080/01050206.1949.10416304.
  • Jakobson, Roman; Fant, Gunnar; and Halle, Morris. (1952). Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
  • Kaisse, Ellen M.; and Shaw, Patricia A. (1985). On the theory of lexical phonology. In E. Colin and J. Anderson (Eds.), Phonology Yearbook 2 (pp. 1–30).
  • Kenstowicz, Michael. (1994). Phonology in generative grammar. Oxford: Basil Blackwell.
  • Ladefoged, Peter. (1982). A course in phonetics (2nd ed.). London: Harcourt Brace Jovanovich.
  • Martinet, André (1949). Phonology as functional phonetics. Oxford: Blackwell.
  • Martinet, André (1955). Économie des changements phonétiques: Traité de phonologie diachronique. Berne: A. Francke S.A.
  • Napoli, Donna Jo (1996). Linguistics: An Introduction. New York: Oxford University Press.
  • Pike, Kenneth Lee (1947). Phonemics: A technique for reducing languages to writing. Ann Arbor: University of Michigan Press.
  • Sandler, Wendy and Lillo-Martin, Diane. (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press
  • Sapir, Edward (1925). "Sound patterns in language". Language. 1 (2): 37–51. doi:10.2307/409004. JSTOR 409004.
  • Sapir, Edward (1933). "La réalité psychologique des phonémes". Journal de Psychologie Normale et Pathologique. 30: 247–265.
  • de Saussure, Ferdinand. (1916). Cours de linguistique générale. Paris: Payot.
  • Stampe, David. (1979). A dissertation on natural phonology. New York: Garland.
  • Swadesh, Morris (1934). "The phonemic principle". Language. 10 (2): 117–129. doi:10.2307/409603. JSTOR 409603.
  • Trager, George L.; Bloch, Bernard (1941). "The syllabic phonemes of English". Language. 17 (3): 223–246. doi:10.2307/409203. JSTOR 409203.
  • Trubetzkoy, Nikolai. (1939). Grundzüge der Phonologie. Travaux du Cercle Linguistique de Prague 7.
  • Twaddell, William F. (1935). On defining the phoneme. Language monograph no. 16. Language.

External links

