In computational linguistics, lexical density is an estimated measure of content: the proportion of lexical units (lexemes) to the total number of lexical and functional (grammatical) units in a text. It is used in discourse analysis as a descriptive parameter that varies with register and genre; spoken texts, for example, tend to have a lower lexical density than written ones (Ure 1971).
Lexical density may be determined thus:

$$L_d = \frac{N_{\mathrm{lex}}}{N} \times 100$$

where:

$L_d$ = the analysed text's lexical density

$N_{\mathrm{lex}}$ = the number of lexical word tokens (nouns, adjectives, verbs, adverbs) in the analysed text

$N$ = the number of all tokens (total number of words) in the analysed text

(The variable symbols used here are not conventional; they were chosen arbitrarily to illustrate the example.)
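A minimal Python sketch of the computation follows. The Penn Treebank tag prefixes and the hand-tagged sample sentence are illustrative assumptions, not part of the definition; in practice the word-class tags would come from a part-of-speech tagger.

```python
# Lexical density: the share of lexical word tokens (nouns, adjectives,
# verbs, adverbs) among all tokens, expressed as a percentage.

# Penn Treebank tag prefixes for the four lexical word classes
# (an illustrative assumption; other tagsets need other prefixes).
LEXICAL_TAG_PREFIXES = ("NN", "JJ", "VB", "RB")

def lexical_density(tagged_tokens):
    """Compute L_d = (N_lex / N) * 100 for a list of (word, tag) pairs."""
    n_total = len(tagged_tokens)
    if n_total == 0:
        return 0.0
    n_lex = sum(1 for _, tag in tagged_tokens
                if tag.startswith(LEXICAL_TAG_PREFIXES))
    return 100.0 * n_lex / n_total

# A hand-tagged example sentence (illustrative only):
sample = [("The", "DT"), ("quick", "JJ"), ("fox", "NN"),
          ("jumps", "VBZ"), ("over", "IN"), ("the", "DT"),
          ("lazy", "JJ"), ("dog", "NN")]

print(lexical_density(sample))  # 5 lexical tokens out of 8 -> 62.5
```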
- Ure, J. (1971). "Lexical density and register differentiation". In G. Perren and J. L. M. Trim (eds), Applications of Linguistics. London: Cambridge University Press. pp. 443–452.