Distributional semantics is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-called distributional hypothesis: linguistic items with similar distributions have similar meanings.
The distributional hypothesis in linguistics is the theory that words that occur in the same contexts tend to have similar meanings. The underlying idea that "a word is characterized by the company it keeps" was popularized by Firth (1957). The distributional hypothesis is the basis for statistical semantics. Although it originated in linguistics, the hypothesis is now receiving attention in cognitive science, especially regarding the context of word use.
In recent years, the distributional hypothesis has provided the basis for the theory of similarity-based generalization in language learning: the idea that children can figure out how to use words they've rarely encountered before by generalizing about their use from distributions of similar words. The distributional hypothesis suggests that the more semantically similar two words are, the more distributionally similar they will be in turn, and thus the more that they will tend to occur in similar linguistic contexts. Whether or not this suggestion holds has significant implications for both the data-sparsity problem in computational modeling, and for the question of how children are able to learn language so rapidly given relatively impoverished input (this is also known as the problem of the poverty of the stimulus).
Distributional semantic modeling
Distributional semantics favors the use of linear algebra as its computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity. Different kinds of similarities can be extracted depending on which type of distributional information is used to populate the vectors: topical similarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in; paradigmatic similarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with. Note that the latter type of vectors can also be used to extract syntagmatic similarities by looking at the individual vector components.
The basic idea of a correlation between distributional and semantic similarity can be operationalized in many different ways. There is a rich variety of computational models implementing distributional semantics, including Latent semantic analysis (LSA), Hyperspace Analogue to Language (HAL), syntax- or dependency-based models, Random indexing, and variants of the topic model.
Distributional semantic models differ primarily with respect to the following parameters:
- Context type (text regions vs. linguistic items)
- Context window (size, extension, etc.)
- Frequency weighting (e.g. entropy, pointwise mutual information)
- Dimension reduction (e.g. random indexing, singular value decomposition)
- Similarity measure (e.g. cosine similarity, Minkowski distance)
- Harris, Z. (1954). "Distributional structure". Word 10 (23): 146–162.
- Firth, J.R. (1957). A synopsis of linguistic theory 1930–1955. In Studies in Linguistic Analysis, pp. 1–32. Oxford: Philological Society. Reprinted in F.R. Palmer (ed.), Selected Papers of J.R. Firth 1952–1959, London: Longman (1968).
- Sahlgren, Magnus (2008). "The Distributional Hypothesis". Rivista di Linguistica 20 (1): 33–53.
- McDonald, S.; Ramscar, M. (2001). "Testing the distributional hypothesis: The influence of context on judgements of semantic similarity". Proceedings of the 23rd Annual Conference of the Cognitive Science Society. pp. 611–616. CiteSeerX: 10.1.1.104.7535.
- Gleitman, Lila R. 2002. “Verbs of a feather flock together II: The child's discovery of words and their meanings”. In B. Nevin & S. B. Johnson, eds., The Legacy of Zellig Harris: Language and information into the 21st century, vol. 1: Philosophy of science, syntax and semantics. Current issues in Linguistic Theory 228, pp. 209–229. John Benjamins Publishing Company.
- Yarlett, D. (2008). Language Learning Through Similarity-Based Generalization (PhD thesis). Stanford University.
- Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, Richard Harshman (1990). "Indexing by Latent Semantic Analysis" (PDF). Journal of the American Society for Information Science 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9.
- Thomas Landauer, Susan T. Dumais (1997). "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge". Psychological Review 104 (2): 211–240.
- Kevin Lund, Curt Burgess, Ruth Ann Atchley (1995). "Semantic and associative priming in a high-dimensional semantic space". Cognitive Science Proceedings. pp. 660–665.
- Kevin Lund, Curt Burgess (1996). "Producing high-dimensional semantic spaces from lexical co-occurrence". Behavior Research Methods, Instruments & Computers 28 (2): 203–208.
- Padó, Sebastian; Lapata, Mirella (2007). "Dependency-based construction of semantic space models". Computational Linguistics 33 (2): 161–199.
- Sahlgren, Magnus (2006). The Word-Space Model (PhD thesis). Stockholm University.
- Schütze, Hinrich (1993). "Word Space". Advances in Neural Information Processing Systems 5. pp. 895–902. CiteSeerX: 10.1.1.41.8856.