The bag-of-words model is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision.
The following example models a text document using bag-of-words.
Here are two simple text documents:
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
Based on these two text documents, a list is constructed as follows:
[ "John", "likes", "to", "watch", "movies", "also", "football", "games", "Mary", "too" ]
In practice, the bag-of-words model is mainly used as a tool of feature generation. After transforming the text into a "bag of words", we can calculate various measures to characterize the text. The most common type of feature calculated from the bag-of-words model is term frequency, namely the number of times a term appears in the text. For the example above, we can construct the following two lists to record the term frequencies of all the distinct words:
(1) [1, 2, 1, 1, 2, 0, 0, 0, 1, 1]
(2) [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]
Each entry of the lists is the count of the corresponding word in the vocabulary list (this is also the histogram representation). For example, in the first list (which represents document 1), the first two entries are "1, 2". The first entry corresponds to the word "John", which is the first word in the list, and its value is "1" because "John" appears in the first document once. Similarly, the second entry corresponds to the word "likes", which is the second word in the list, and its value is "2" because "likes" appears in the first document twice. This list (or vector) representation does not preserve the order of the words in the original sentences; this is the main feature of the bag-of-words model. This kind of representation has several successful applications, such as email filtering.
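A minimal sketch of this construction in Python (the tokenization here simply strips the periods so that the tokens match the word list above; it is an illustration, not a full tokenizer):

```python
from collections import Counter

documents = [
    "John likes to watch movies. Mary likes movies too.",
    "John also likes to watch football games.",
]

def tokenize(text):
    # Remove the periods so tokens match the word list above
    return text.replace(".", "").split()

# Vocabulary in the order used in the example above
vocabulary = ["John", "likes", "to", "watch", "movies",
              "also", "football", "games", "Mary", "too"]

# One term-frequency vector per document, indexed by the vocabulary
vectors = []
for doc in documents:
    counts = Counter(tokenize(doc))
    vectors.append([counts[word] for word in vocabulary])

print(vectors[0])  # [1, 2, 1, 1, 2, 0, 0, 0, 1, 1]
print(vectors[1])  # [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]
```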
However, term frequencies are not necessarily the best representation for the text. Common words like "the", "a", and "to" almost always have the highest term frequencies in a text, so a high raw count does not necessarily mean that the corresponding word is more important. To address this problem, one of the most popular ways to "normalize" the term frequencies is to weight each term by the inverse of its document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed that take into account the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems. (For instance, this option is implemented in the WEKA machine learning software system.)
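As an illustrative sketch, one common tf–idf variant (term frequency multiplied by the logarithm of the inverse document frequency) can be computed over the term-frequency vectors built above; the exact weighting and smoothing formula varies between implementations:

```python
import math

def tf_idf(vectors):
    # vectors: list of term-frequency lists, all indexed by the same vocabulary
    n_docs = len(vectors)
    n_terms = len(vectors[0])
    # Document frequency: number of documents containing each term
    df = [sum(1 for v in vectors if v[j] > 0) for j in range(n_terms)]
    # Unsmoothed idf; real systems often add smoothing terms
    idf = [math.log(n_docs / df[j]) if df[j] > 0 else 0.0 for j in range(n_terms)]
    return [[v[j] * idf[j] for j in range(n_terms)] for v in vectors]

# Reusing the `vectors` list from the previous sketch: terms occurring in both
# documents ("John", "likes", "to", "watch") receive weight 0 under this variant,
# while terms unique to one document keep a positive weight.
weighted = tf_idf(vectors)
```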
The bag-of-words model is an orderless document representation: only the counts of words matter. For instance, in the example above, "John likes to watch movies. Mary likes movies too", the bag-of-words representation will not reveal that a person's name is always followed by the verb "likes" in this text. As an alternative, the n-gram model can store this spatial information within the text. Applied to the same example, a bigram model will parse the text into the following units and store the term frequency of each unit as before.
[ "John likes", "likes to", "to watch", "watch movies", "Mary likes", "likes movies", "movies too", ]
Conceptually, we can view the bag-of-words model as a special case of the n-gram model with n = 1. See language model for a more detailed discussion.
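A minimal sketch of bigram extraction (sentence boundaries are respected, as in the list above; this is an illustration, not a canonical tokenizer):

```python
def bigrams(text):
    units = []
    # Split on periods so that no bigram spans a sentence boundary
    for sentence in text.split("."):
        words = sentence.split()
        units.extend(" ".join(pair) for pair in zip(words, words[1:]))
    return units

print(bigrams("John likes to watch movies. Mary likes movies too."))
# ['John likes', 'likes to', 'to watch', 'watch movies',
#  'Mary likes', 'likes movies', 'movies too']
```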
A common alternative to the use of dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function, so that no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets. In practice, hashing greatly simplifies the implementation of bag-of-words models and improves their scalability.
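A rough sketch of the hashing trick with a fixed number of buckets (Python's built-in hash is used purely for illustration; production systems typically use a stable hash such as MurmurHash, often with a signed variant to reduce the effect of collisions):

```python
def hashed_bow(text, n_buckets=16):
    # Map each token to a bucket index with a hash function;
    # no dictionary of words needs to be stored.
    vector = [0] * n_buckets
    for token in text.replace(".", "").split():
        vector[hash(token) % n_buckets] += 1
    return vector

print(hashed_bow("John also likes to watch football games."))
```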
Example usage: spam filtering
In Bayesian spam filtering, an e-mail message is modeled as an unordered collection of words selected from one of two probability distributions: one representing spam and one representing legitimate e-mail ("ham"). Imagine that there are two literal bags full of words. One bag is filled with words found in spam messages, and the other bag is filled with words found in legitimate e-mail. While any given word is likely to be found somewhere in both bags, the "spam" bag will contain spam-related words such as "stock", "Viagra", and "buy" much more frequently, while the "ham" bag will contain more words related to the user's friends or workplace.
To classify an e-mail message, the Bayesian spam filter assumes that the message is a pile of words that has been poured out randomly from one of the two bags, and uses Bayesian probability to determine which bag it is more likely to have come from.
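A minimal naive Bayes sketch over bag-of-words counts (the word counts below are invented for illustration; a real filter would estimate them from labeled e-mail and would also use class priors learned from the data):

```python
import math
from collections import Counter

# Hypothetical training counts for the two "bags" (spam and ham)
spam_counts = Counter({"buy": 20, "stock": 15, "viagra": 10, "meeting": 1})
ham_counts  = Counter({"meeting": 18, "project": 12, "lunch": 8, "buy": 2})

def log_likelihood(words, counts, vocab_size, alpha=1.0):
    # log P(words | class) under a multinomial model with additive smoothing
    total = sum(counts.values())
    return sum(math.log((counts[w] + alpha) / (total + alpha * vocab_size))
               for w in words)

def classify(message):
    words = message.lower().split()
    vocab = set(spam_counts) | set(ham_counts)
    spam_score = log_likelihood(words, spam_counts, len(vocab))
    ham_score = log_likelihood(words, ham_counts, len(vocab))
    # Equal prior probabilities for spam and ham are assumed here
    return "spam" if spam_score > ham_score else "ham"

print(classify("buy stock now"))        # "spam" under these counts
print(classify("lunch after meeting"))  # "ham" under these counts
```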
See also
- Additive smoothing
- Bag-of-words model in computer vision
- Document classification
- Document-term matrix
- Feature extraction
- Hashing trick
- Machine learning
- Natural language processing
- Vector space model
References
- Sivic, Josef (April 2009). "Efficient visual search of videos cast as text retrieval" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (4): 591–605.
- Harris, Zellig (1954). "Distributional Structure". Word. 10 (2/3): 146–62. "And this stock of combinations of elements becomes a factor in the way later choices are made ... for language is not merely a bag of words but a tool with particular properties which have been fashioned in the course of its use."
- Youngjoong Ko (2012). "A study of term weighting schemes using class information for text classification". SIGIR'12. ACM.
- Weinberger, K. Q.; Dasgupta, A.; Langford, J.; Smola, A.; Attenberg, J. (2009). "Feature hashing for large scale multitask learning". Proceedings of the 26th Annual International Conference on Machine Learning: 1113–1120.