
Language model

From Wikipedia, the free encyclopedia

A language model is a probability distribution over sequences of words.[1] Given any sequence of words of length m, a language model assigns a probability to the whole sequence. Language models generate probabilities by training on text corpora in one or many languages. Given that languages can be used to express an infinite variety of valid sentences (the property of digital infinity), language modeling faces the problem of assigning non-zero probabilities to linguistically valid sequences that may never be encountered in the training data. Several modelling approaches have been designed to surmount this problem, such as applying the Markov assumption or using neural architectures such as recurrent neural networks or transformers.

Language models are useful for a variety of problems in computational linguistics, from early applications in speech recognition[2] (ensuring that nonsensical, i.e. low-probability, word sequences are not predicted) to wider use in machine translation[3] (e.g. scoring candidate translations), natural language generation (generating more human-like text), part-of-speech tagging, parsing,[3] optical character recognition, handwriting recognition,[4] grammar induction,[5] information retrieval,[6][7] and other applications.

Language models are used in information retrieval in the query likelihood model. There, a separate language model M_d is associated with each document d in a collection. Documents are ranked based on the probability of the query Q under the document's language model: P(Q \mid M_d). Commonly, the unigram language model is used for this purpose.

Model types

Unigram

A unigram model can be treated as the combination of several one-state finite automata.[8] It assumes that the probabilities of tokens in a sequence are independent, e.g.:

    P_{\text{uni}}(t_1 t_2 t_3) = P(t_1) P(t_2) P(t_3)

In this model, the probability of each word only depends on that word's own probability in the document, so we only have one-state finite automata as units. The automaton itself has a probability distribution over the entire vocabulary of the model, summing to 1. The following is an illustration of a unigram model of a document.

Terms Probability in doc
a 0.1
world 0.2
likes 0.05
we 0.05
share 0.3
... ...

The probability generated for a specific query is calculated as

    P(\text{query}) = \prod_{t \in \text{query}} P(t)

Different documents have different unigram models, with different hit probabilities for the words in them. The probability distributions of the different documents are used to generate a hit probability for each query, and documents can then be ranked for a query according to these probabilities. Example of unigram models of two documents (a short scoring sketch follows the table):

Terms Probability in Doc1 Probability in Doc2
a 0.1 0.3
world 0.2 0.1
likes 0.05 0.03
we 0.05 0.02
share 0.3 0.2
... ... ...
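The following is a minimal sketch, in Python, of how the query likelihood model could rank the two documents above. The probabilities are taken from the illustrative table, the query is made up, and terms missing from a model simply receive probability zero here (smoothing, discussed next, avoids that in practice).

    # Rank two documents for a query under the unigram query-likelihood model.
    doc_models = {
        "Doc1": {"a": 0.1, "world": 0.2, "likes": 0.05, "we": 0.05, "share": 0.3},
        "Doc2": {"a": 0.3, "world": 0.1, "likes": 0.03, "we": 0.02, "share": 0.2},
    }

    def query_likelihood(query_terms, model):
        """P(query | document model) = product of unigram term probabilities."""
        p = 1.0
        for term in query_terms:
            p *= model.get(term, 0.0)   # unseen terms get 0 without smoothing
        return p

    query = ["we", "share", "a", "world"]
    for doc in sorted(doc_models, key=lambda d: query_likelihood(query, doc_models[d]), reverse=True):
        print(doc, query_likelihood(query, doc_models[doc]))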

In information retrieval contexts, unigram language models are often smoothed to avoid instances where P(term) = 0. A common approach is to generate a maximum-likelihood model for the entire collection and linearly interpolate the collection model with a maximum-likelihood model for each document to smooth the model.[9]
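As a sketch of that interpolation (a form of what is often called Jelinek-Mercer smoothing), a document's term probability can be mixed with a collection-wide probability so that no term receives probability zero; the mixing weight below is illustrative and would normally be tuned:

    LAMBDA = 0.7  # weight on the document model (illustrative)

    def smoothed_prob(term, doc_model, collection_model):
        """Linear interpolation of document and collection unigram models."""
        return LAMBDA * doc_model.get(term, 0.0) + (1 - LAMBDA) * collection_model.get(term, 0.0)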

n-gram

In an n-gram model, the probability P(w_1, \ldots, w_m) of observing the sentence w_1, \ldots, w_m is approximated as

    P(w_1, \ldots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_1, \ldots, w_{i-1}) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-(n-1)}, \ldots, w_{i-1})

It is assumed that the probability of observing the ith word w_i in the context history of the preceding i − 1 words can be approximated by the probability of observing it in the shortened context history of the preceding n − 1 words (an (n − 1)th-order Markov property). To clarify, for n = 3 and i = 2 we have P(w_2 \mid w_1).

The conditional probability can be calculated from n-gram model frequency counts:

    P(w_i \mid w_{i-(n-1)}, \ldots, w_{i-1}) = \frac{\mathrm{count}(w_{i-(n-1)}, \ldots, w_{i-1}, w_i)}{\mathrm{count}(w_{i-(n-1)}, \ldots, w_{i-1})}

The terms bigram and trigram language models denote n-gram models with n = 2 and n = 3, respectively.[10]

Typically, the n-gram model probabilities are not derived directly from frequency counts, because models derived this way have severe problems when confronted with any n-grams that have not been explicitly seen before. Instead, some form of smoothing is necessary, assigning some of the total probability mass to unseen words or n-grams. Various methods are used, from simple "add-one" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good-Turing discounting or back-off models.
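The sketch below shows how such a model could be estimated from counts with "add-one" smoothing. The toy corpus and the choice of n = 3 are purely illustrative:

    from collections import Counter

    n = 3
    corpus = [["<s>", "<s>", "i", "saw", "the", "red", "house", "</s>"],
              ["<s>", "<s>", "i", "saw", "the", "dog", "</s>"]]

    ngram_counts = Counter()
    context_counts = Counter()
    vocab = set()
    for sent in corpus:
        vocab.update(sent)
        for i in range(n - 1, len(sent)):
            context = tuple(sent[i - n + 1:i])
            ngram_counts[context + (sent[i],)] += 1
            context_counts[context] += 1

    def prob(word, context):
        """P(word | context) with add-one (Laplace) smoothing over the vocabulary."""
        return (ngram_counts[tuple(context) + (word,)] + 1) / (context_counts[tuple(context)] + len(vocab))

    print(prob("the", ("i", "saw")))   # a trigram seen in the corpus
    print(prob("cat", ("i", "saw")))   # an unseen trigram still gets non-zero probability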

Bidirectional

Bidirectional representations condition on both pre- and post-context (e.g., words) in all layers.[11]

Example

In a bigram (n = 2) language model, the probability of the sentence I saw the red house is approximated as

    P(\text{I, saw, the, red, house}) \approx P(\text{I} \mid \langle s\rangle)\, P(\text{saw} \mid \text{I})\, P(\text{the} \mid \text{saw})\, P(\text{red} \mid \text{the})\, P(\text{house} \mid \text{red})\, P(\langle/s\rangle \mid \text{house})

whereas in a trigram (n = 3) language model, the approximation is

    P(\text{I, saw, the, red, house}) \approx P(\text{I} \mid \langle s\rangle, \langle s\rangle)\, P(\text{saw} \mid \langle s\rangle, \text{I})\, P(\text{the} \mid \text{I, saw})\, P(\text{red} \mid \text{saw, the})\, P(\text{house} \mid \text{the, red})\, P(\langle/s\rangle \mid \text{red, house})

Note that the context of the first n – 1 n-grams is filled with start-of-sentence markers, typically denoted <s>.

Additionally, without an end-of-sentence marker, the probability of an ungrammatical sequence *I saw the would always be higher than that of the longer sentence I saw the red house.
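A small numerical sketch of the bigram product above, with made-up conditional probabilities (only the structure of the computation matters here):

    import math

    bigram_probs = {
        ("<s>", "i"): 0.2, ("i", "saw"): 0.1, ("saw", "the"): 0.3,
        ("the", "red"): 0.05, ("red", "house"): 0.4, ("house", "</s>"): 0.6,
    }
    sentence = ["<s>", "i", "saw", "the", "red", "house", "</s>"]
    # Sum log-probabilities of consecutive word pairs, then exponentiate.
    log_p = sum(math.log(bigram_probs[(a, b)]) for a, b in zip(sentence, sentence[1:]))
    print(math.exp(log_p))  # P(I saw the red house) under this bigram model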

Exponential

Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

    P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\big(a^{\mathsf{T}} f(w_1, \ldots, w_m)\big)

where Z(w_1, \ldots, w_{m-1}) is the partition function, a is the parameter vector, and f(w_1, \ldots, w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization.
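A minimal sketch of this kind of model, with indicator features on bigrams and illustrative (untrained) weights; a real model would fit the weights from data, with regularization:

    import math

    vocab = ["the", "red", "house", "dog"]
    weights = {("the", "red"): 1.2, ("the", "dog"): 0.4, ("red", "house"): 2.0}  # a, indexed by indicator features

    def p(word, history_word):
        """P(word | history) = exp(a . f(history, word)) / Z(history)."""
        scores = {w: math.exp(weights.get((history_word, w), 0.0)) for w in vocab}
        z = sum(scores.values())  # partition function Z(history)
        return scores[word] / z

    print(p("red", "the"))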

The log-bilinear model is another example of an exponential language model.

Neural network

Neural language models (or continuous space language models) use continuous representations or embeddings of words to make their predictions.[12] These models make use of neural networks.

Continuous space embeddings help to alleviate the curse of dimensionality in language modeling: as language models are trained on larger and larger texts, the number of unique words (the vocabulary) increases.[a] The number of possible sequences of words increases exponentially with the size of the vocabulary, causing a data sparsity problem: for most sequences, there are too few observations to estimate probabilities reliably. Neural networks avoid this problem by representing words in a distributed way, as non-linear combinations of weights in a neural net.[13] An alternate description is that a neural net approximates the language function. The neural net architecture might be feed-forward or recurrent, and while the former is simpler the latter is more common.[example needed][citation needed]

Typically, neural net language models are constructed and trained as probabilistic classifiers that learn to predict a probability distribution

    P(w_t \mid \mathrm{context}) \quad \forall t \in V.

I.e., the network is trained to predict a probability distribution over the vocabulary, given some linguistic context. This is done using standard neural net training algorithms such as stochastic gradient descent with backpropagation.[13] The context might be a fixed-size window of previous words, so that the network predicts

    P(w_t \mid w_{t-k}, \ldots, w_{t-1})

from a feature vector representing the previous k words.[13] Another option is to use "future" words as well as "past" words as features, so that the estimated probability is

    P(w_t \mid w_{t-k}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+k}).

This is called a bag-of-words model. When the feature vectors for the words in the context are combined by a continuous operation, this model is referred to as the continuous bag-of-words architecture (CBOW).[14]
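A minimal sketch of the CBOW idea: average the embeddings of the context words and turn the result into a distribution over the vocabulary with a softmax. The vocabulary, dimensions, and random initialization are illustrative, and training (e.g. stochastic gradient descent on a cross-entropy loss) is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["i", "saw", "the", "red", "house"]
    V, d = len(vocab), 16
    E = rng.normal(size=(V, d))    # input word embeddings
    W = rng.normal(size=(d, V))    # output projection

    def cbow_distribution(context_words):
        """P(w_t | context) from the averaged context embeddings."""
        idx = [vocab.index(w) for w in context_words]
        h = E[idx].mean(axis=0)            # continuous combination of context vectors
        logits = h @ W
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    print(cbow_distribution(["the", "house"]))  # distribution over candidate middle words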

A third option that trains slower than the CBOW but performs slightly better is to invert the previous problem and make a neural network learn the context, given a word.[14] More formally, given a sequence of training words w_1, \ldots, w_T, one maximizes the average log-probability

    \frac{1}{T} \sum_{t=1}^{T} \sum_{-k \le j \le k,\; j \ne 0} \log P(w_{t+j} \mid w_t)

where k, the size of the training context, can be a function of the center word w_t. This is called a skip-gram language model.[15] Bag-of-words and skip-gram models are the basis of the word2vec program.[16]
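The objective can be sketched as below; the sentence and window size are illustrative, and the model probability P(w_{t+j} | w_t) is left as a uniform placeholder where a real skip-gram model would use a softmax over learned embeddings:

    import math

    sentence = ["i", "saw", "the", "red", "house"]
    k = 2
    vocab = sorted(set(sentence))

    def p(context_word, center_word):
        return 1.0 / len(vocab)   # placeholder for the model's softmax

    T = len(sentence)
    objective = 0.0
    for t in range(T):
        for j in range(-k, k + 1):
            if j != 0 and 0 <= t + j < T:
                objective += math.log(p(sentence[t + j], sentence[t]))
    objective /= T
    print(objective)  # the average log-probability that training maximizes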

Instead of using neural net language models to produce actual probabilities, it is common to instead use the distributed representation encoded in the networks' "hidden" layers as representations of words; each word is then mapped onto an n-dimensional real vector called the word embedding, where n is the size of the layer just before the output layer. The representations in skip-gram models have the distinct characteristic that they model semantic relations between words as linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then

    v(\text{king}) - v(\text{male}) + v(\text{female}) \approx v(\text{queen})

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.[14][15]
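A tiny nearest-neighbour sketch of this analogy property, using hand-made two-dimensional vectors chosen so that the relation holds (real embeddings are learned and have hundreds of dimensions):

    import numpy as np

    v = {
        "king":   np.array([0.9, 0.8]),
        "queen":  np.array([0.9, 0.2]),
        "male":   np.array([0.1, 0.75]),
        "female": np.array([0.1, 0.15]),
    }

    target = v["king"] - v["male"] + v["female"]

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # The nearest neighbour of the left-hand side (excluding the query word) should be "queen".
    print(max((w for w in v if w != "king"), key=lambda w: cosine(v[w], target)))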

Other

A positional language model[17] assesses the probability of given words occurring close to one another in a text, not necessarily immediately adjacent. Similarly, bag-of-concepts models[18] leverage the semantics associated with multi-word expressions such as buy_christmas_present, even when they are used in information-rich sentences like "today I bought a lot of very nice Christmas presents".

Despite the limited success of neural networks in this area, the authors of one study acknowledge the need for other techniques when modelling sign languages.[19]

Notable language models

Notable language models include:

  • Pathways Language Model (PaLM) 540 billion parameter model, from Google Research.[20]
  • Generalist Language Model (GLaM) 1 trillion parameter model, from Google Research[21]
  • Language Models for Dialog Applications (LaMDA) 137 billion parameter model from Google Research[22]
  • Megatron-Turing NLG 530 billion parameter model, from Microsoft/Nvidia[23]
  • DreamFusion/Imagen 3D image generation from Google Research[24]
  • Get3D from Nvidia[25]
  • MineClip from Nvidia[26]
  • BLOOM: BigScience Large Open-science Open-access Multilingual Language Model with 176 billion parameters.
  • GPT-2: Generative Pre-trained Transformer 2 with 1.5 billion parameters.
  • GPT-3: Generative Pre-trained Transformer 3, with an unprecedented 2048-token context window and 175 billion parameters (requiring 800 GB of storage).
  • GPT-3.5/ChatGPT/InstructGPT from OpenAI[27]
  • BERT: Bidirectional Encoder Representations from Transformers (BERT)
  • GPT-NeoX-20B: An Open-Source Autoregressive Language Model with 20 billion parameters.
  • OPT-175B by Meta AI: another 175-billion-parameter language model. It is available to the broader AI research community.
  • Point-E by OpenAI: a 3D model generator.[28]
  • RT-1 by Google: a model for operating robots[29]
  • ERNIE-Code by Baidu: a 560-million-parameter multilingual coding model[29]
  • VALL-E: text-to-speech synthesis from a 3-second speech sample.[30] It was pre-trained on 60,000 hours of English speech from 7,000 unique speakers (dataset: LibriLight).[31]

Hugging Face hosts a set of publicly available language models for developers to build applications using machine learning.
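For example, one such publicly hosted model can be loaded with the transformers library in a few lines (a minimal, illustrative sketch; the model name and prompt are only examples, and the library must be installed):

    from transformers import pipeline

    # Download a hosted model and generate a continuation of a prompt.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("A language model is", max_length=20, num_return_sequences=1))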

Evaluation and benchmarks

Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from the data they see, some proposed models investigate the rate of learning, e.g. through inspection of learning curves.[32]

Various data sets have been developed for use in evaluating language processing systems.[11] These include:

  • Corpus of Linguistic Acceptability[33]
  • GLUE benchmark[34]
  • Microsoft Research Paraphrase Corpus[35]
  • Multi-Genre Natural Language Inference
  • Question Natural Language Inference
  • Quora Question Pairs[36]
  • Recognizing Textual Entailment[37]
  • Semantic Textual Similarity Benchmark
  • SQuAD question answering Test[38]
  • Stanford Sentiment Treebank[39]
  • Winograd NLI

Criticism

Although contemporary language models, such as GPT-2, can be shown to match human performance on some tasks, it is not clear they are plausible cognitive models. For instance, recurrent neural networks have been shown to learn patterns humans do not learn and fail to learn patterns that humans do learn.[40]

Notes

  1. ^ See Heaps' law.

References

Citations

  1. ^ Jurafsky, Dan; Martin, James H. (2021). "N-gram Language Models". Speech and Language Processing (3rd ed.). Retrieved 24 May 2022.
  2. ^ Kuhn, Roland, and Renato De Mori. "A cache-based natural language model for speech recognition." IEEE transactions on pattern analysis and machine intelligence 12.6 (1990): 570-583.
  3. ^ a b Andreas, Jacob, Andreas Vlachos, and Stephen Clark. "Semantic parsing as machine translation." Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2013.
  4. ^ Pham, Vu, et al. "Dropout improves recurrent neural networks for handwriting recognition." 2014 14th International Conference on Frontiers in Handwriting Recognition. IEEE, 2014.
  5. ^ Htut, Phu Mon, Kyunghyun Cho, and Samuel R. Bowman. "Grammar induction with neural language models: An unusual replication." arXiv preprint arXiv:1808.10000 (2018).
  6. ^ Ponte, Jay M.; Croft, W. Bruce (1998). A language modeling approach to information retrieval. Proceedings of the 21st ACM SIGIR Conference. Melbourne, Australia: ACM. pp. 275–281. doi:10.1145/290941.291008.
  7. ^ Hiemstra, Djoerd (1998). A linguistically motivated probabilistic model of information retrieval. Proceedings of the 2nd European conference on Research and Advanced Technology for Digital Libraries. LNCS, Springer. pp. 569–584. doi:10.1007/3-540-49653-X_34.
  8. ^ Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze: An Introduction to Information Retrieval, pages 237–240. Cambridge University Press, 2009
  9. ^ Buttcher, Clarke, and Cormack. Information Retrieval: Implementing and Evaluating Search Engines. pg. 289–291. MIT Press.
  10. ^ Craig Trim, What is Language Modeling?, April 26th, 2013.
  11. ^ a b Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (10 October 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805 [cs.CL].
  12. ^ Karpathy, Andrej. "The Unreasonable Effectiveness of Recurrent Neural Networks".
  13. ^ a b c Bengio, Yoshua (2008). "Neural net language models". Scholarpedia. Vol. 3. p. 3881. Bibcode:2008SchpJ...3.3881B. doi:10.4249/scholarpedia.3881.
  14. ^ a b c Mikolov, Tomas; Chen, Kai; Corrado, Greg; Dean, Jeffrey (2013). "Efficient estimation of word representations in vector space". arXiv:1301.3781 [cs.CL].
  15. ^ a b Mikolov, Tomas; Sutskever, Ilya; Chen, Kai; Corrado, Greg S.; Dean, Jeff (2013). Distributed Representations of Words and Phrases and their Compositionality (PDF). Advances in Neural Information Processing Systems. pp. 3111–3119.
  16. ^ Harris, Derrick (16 August 2013). "We're on the cusp of deep learning for the masses. You can thank Google later". Gigaom.
  17. ^ Lv, Yuanhua; Zhai, ChengXiang (2009). "Positional Language Models for Information Retrieval" (PDF). Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval (SIGIR).
  18. ^ Cambria, Erik; Hussain, Amir (28 July 2012). Sentic Computing: Techniques, Tools, and Applications. Springer Netherlands. ISBN 978-94-007-5069-2.
  19. ^ Mocialov, Boris; Hastie, Helen; Turner, Graham (August 2018). "Transfer Learning for British Sign Language Modelling". Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018): 101–110. arXiv:2006.02144. Retrieved 14 March 2020.
  20. ^ Narang, Sharan; Chowdhery, Aakanksha (4 April 2022). "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance". ai.googleblog.com. Retrieved 5 December 2022.
  21. ^ Dai, Andrew M; Du, Nan (9 December 2021). "More Efficient In-Context Learning with GLaM". ai.googleblog.com. Retrieved 5 December 2022.
  22. ^ Cheng, Heng-Tze; Thoppilan, Romal (21 January 2022). "LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything". ai.googleblog.com. Retrieved 5 December 2022.
  23. ^ Smith, Shaden; Patwary, Mostofa; Norick, Brandon; LeGresley, Patrick; Rajbhandari, Samyam; Casper, Jared; Liu, Zhun; Prabhumoye, Shrimai; Zerveas, George; Korthikanti, Vijay; Zhang, Elton; Child, Rewon; Aminabadi, Reza Yazdani; Bernauer, Julie; Song, Xia (4 February 2022). "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model". arXiv:2201.11990.
  24. ^ Poole, Ben; Jain, Ajay; Barron, Jonathan T.; Mildenhall, Ben (2022). "DreamFusion: Text-to-3D using 2D Diffusion". Retrieved 5 December 2022.
  25. ^ Gao, Jun; Shen, Tianchang; Zian, Wang; Chen, Wenzheng; Yin, Kangxue; Li, Diaqing; Litany, Or; Gojcic, Zan; Fidler, Sanja (2022). "GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images" (PDF). nv-tlabs.github.io. Retrieved 5 December 2022.
  26. ^ Fan, Linxi; Wang, Guanzhi; Jiang, Yunfan; Mandlekar, Ajay; Yang, Yuncong; Zhu, Haoyi; Tang, Andrew; Huang, De-An; Zhu, Yuke; Anandkumar, Anima (17 June 2022). "MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge". arXiv:2206.08853.
  27. ^ "ChatGPT: Optimizing Language Models for Dialogue". OpenAI. 30 November 2022. Retrieved 5 December 2022.
  28. ^ Wiggers, Kyle (20 December 2022). "OpenAI releases Point-E, an AI that generates 3D models". TechCrunch. Retrieved 25 December 2022.
  29. ^ a b "Import AI 313: Smarter robots via foundation models; Stanford trains a small best-in-class medical LM; Baidu builds a multilingual coding dataset". us13.campaign-archive.com. Retrieved 4 January 2023. {{cite web}}: |first= missing |last= (help)
  30. ^ "VALL-E". valle-demo.github.io. Retrieved 6 January 2023.
  31. ^ Clark, Jack (January 2022). "Import AI 314: Language models + text-to-speech; emergent cooperation in wargames; ICML bans LLM-written papers". us13.campaign-archive.com. Retrieved 15 January 2023.
  32. ^ Karlgren, Jussi; Schutze, Hinrich (2015), "Evaluating Learning Language Representations", International Conference of the Cross-Language Evaluation Forum, Lecture Notes in Computer Science, Springer International Publishing, pp. 254–260, doi:10.1007/978-3-319-64206-2_8, ISBN 9783319642055
  33. ^ "The Corpus of Linguistic Acceptability (CoLA)". nyu-mll.github.io. Retrieved 25 February 2019.
  34. ^ "GLUE Benchmark". gluebenchmark.com. Retrieved 25 February 2019.
  35. ^ "Microsoft Research Paraphrase Corpus". Microsoft Download Center. Retrieved 25 February 2019.
  36. ^ Aghaebrahimian, Ahmad (2017), "Quora Question Answer Dataset", Text, Speech, and Dialogue, Lecture Notes in Computer Science, vol. 10415, Springer International Publishing, pp. 66–73, doi:10.1007/978-3-319-64206-2_8, ISBN 9783319642055
  37. ^ Sammons, Mark; Vydiswaran, V.G. Vinod; Roth, Dan. "Recognizing Textual Entailment" (PDF). Retrieved 24 February 2019.
  38. ^ "The Stanford Question Answering Dataset". rajpurkar.github.io. Retrieved 25 February 2019.
  39. ^ "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank". nlp.stanford.edu. Retrieved 25 February 2019.
  40. ^ Hornstein, Norbert; Lasnik, Howard; Patel-Grosz, Pritty; Yang, Charles (9 January 2018). Syntactic Structures after 60 Years: The Impact of the Chomskyan Revolution in Linguistics. Walter de Gruyter GmbH & Co KG. ISBN 978-1-5015-0692-5.
