Word-sense disambiguation

In computational linguistics, word sense disambiguation (WSD) is the process of identifying which sense of a word is used in any given sentence, when the word has a number of distinct senses.

For example, consider two of the distinct senses that exist for the (written) word bass:

  1. a type of fish
  2. tones of low frequency

and the sentences:

  1. I went fishing for some sea bass
  2. The bass line of the song is too weak

To a human, it is obvious that the first sentence is using the word bass in the former sense above, and that in the second sentence the word bass is being used in the latter sense. Developing algorithms to replicate this human ability can often be a difficult task.

Difficulties

One problem with word sense disambiguation is deciding what the senses are. In cases like the word bass above, at least some senses are obviously different. In other cases, however, the different senses can be closely related (one meaning being a metaphorical or metonymic extension of another), and in such cases the division of words into senses becomes much more difficult. Different dictionaries will provide different divisions of words into senses. One solution some researchers have used is to choose a particular dictionary and just use its set of senses. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones, so most researchers ignore the fine-grained distinctions in their work.

Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, humans do not always agree on the task at hand: given a list of senses and sentences, humans will not always agree on which sense a word carries in each sentence. A computer cannot be expected to give better performance on such a task than a human (indeed, since the human serves as the standard, the computer being better than the human is incoherent), so human performance serves as the upper bound. Human performance, however, is much better on coarse-grained than on fine-grained distinctions, which is again why research on coarse-grained distinctions is the most useful.
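As a minimal illustration of this upper bound, the following sketch computes the rate at which two annotators agree on sense labels, which is the kind of figure a WSD system is measured against. The labels for the ten occurrences of bass are invented for illustration, not drawn from any real annotation study.

    # Toy illustration: how often do two human annotators assign the same
    # sense label to the same occurrences of an ambiguous word?
    def agreement_rate(labels_a, labels_b):
        """Fraction of items on which the two annotators chose the same sense."""
        matches = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
        return matches / len(labels_a)

    # Hypothetical sense labels for ten occurrences of "bass".
    annotator_1 = ["fish", "music", "music", "fish", "fish", "music", "fish", "music", "fish", "music"]
    annotator_2 = ["fish", "music", "fish",  "fish", "fish", "music", "fish", "music", "music", "music"]

    print(agreement_rate(annotator_1, annotator_2))  # 0.8, the effective ceiling for a system scored against either annotator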

Some AI researchers like Douglas Lenat argue that one cannot parse meanings from words without some form of common sense ontology. For example:

Jill and Mary are sisters.

(they are sisters of each other), compared with:

Jill and Mary are mothers.

(each is independently a mother). To properly identify the senses of words, one must know common-sense facts.[1]

Approaches

As in all natural language processing, there are two main approaches to WSD — deep approaches and shallow approaches.

Deep approaches presume access to a comprehensive body of world knowledge. Knowledge such as "you can go fishing for a type of fish, but not for low frequency sounds" and "songs have low frequency sounds as parts, but not types of fish" is then used to determine in which sense the word is used. These approaches are not very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside of very limited domains. However, if such knowledge did exist, deep approaches would be much more accurate than shallow approaches. Also, there is a long tradition in computational linguistics of trying such approaches in terms of coded knowledge, and in some cases it is hard to say clearly whether the knowledge involved is linguistic or world knowledge. The first attempt was that by Margaret Masterman and her colleagues at the Cambridge Language Research Unit in England in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as indicators of topics, and looked for repetitions in text using a set-intersection algorithm. It was not very successful, as is described in some detail in Wilks et al. (1996), but it had strong relationships to later work, especially Yarowsky's machine-learning optimisation of a thesaurus method in the 1990s.
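As a rough sketch of the set-intersection idea, the heads of the context words can be pooled and intersected with the heads listed for each candidate sense. The word-to-heads mapping and sense-to-heads mapping below are invented toy stand-ins for the actual Roget's data, not Masterman's system.

    # Toy disambiguation by intersecting thesaurus "heads" (topic categories)
    # of the context words with the heads listed for each sense of the word.

    # Hypothetical word -> heads mapping standing in for Roget's Thesaurus.
    HEADS = {
        "fishing": {"animals", "pursuit"},
        "sea":     {"water", "animals"},
        "song":    {"music"},
        "line":    {"music", "length"},
    }

    # Heads associated with each candidate sense of "bass" (also hypothetical).
    SENSE_HEADS = {
        "bass/fish":  {"animals", "water"},
        "bass/music": {"music", "sound"},
    }

    def disambiguate(context_words):
        # Pool the heads of all context words, then pick the sense whose heads
        # overlap the pooled set most strongly (a set-intersection score).
        context_heads = set()
        for w in context_words:
            context_heads |= HEADS.get(w, set())
        return max(SENSE_HEADS, key=lambda s: len(SENSE_HEADS[s] & context_heads))

    print(disambiguate(["went", "fishing", "sea"]))   # bass/fish
    print(disambiguate(["line", "song", "weak"]))     # bass/music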

Shallow approaches do not try to understand the text. They just consider the surrounding words, using information such as "if bass has the words sea or fishing nearby, it is probably the fish sense; if bass has the words music or song nearby, it is probably the music sense." These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice because of the computer's limited world knowledge. However, it can be confused by sentences like The dogs bark at the tree, which contains the word bark near both tree and dogs.
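Such a nearby-word rule is straightforward to express directly. The following toy sketch hard-codes the rule quoted above for bass; the function name and returned sense labels are illustrative, not taken from a real system.

    # Toy version of the hand-written nearby-word rule for "bass".
    def disambiguate_bass(sentence):
        words = set(sentence.lower().split())
        if words & {"sea", "fishing"}:
            return "fish sense"
        if words & {"music", "song"}:
            return "music sense"
        return "unknown"

    print(disambiguate_bass("I went fishing for some sea bass"))       # fish sense
    print(disambiguate_bass("The bass line of the song is too weak"))  # music sense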

These approaches normally work by defining a window of N content words around each word to be disambiguated in the corpus and statistically analyzing those N surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. However, over the last few years there has not been any major improvement in the performance of any of these methods.
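As a minimal sketch of the statistical version, the following Naïve Bayes classifier is trained on a tiny hand-made sense-tagged corpus. The training sentences and test sentences are invented for illustration; a real system would use a large annotated corpus and a fixed context window.

    import math
    from collections import Counter, defaultdict

    # Tiny hand-made "corpus" of sense-tagged contexts for "bass" (toy data).
    training = [
        ("fish",  "i went fishing for some sea bass near the shore".split()),
        ("fish",  "the bass swam away from the fishing line".split()),
        ("music", "the bass line of the song is too weak".split()),
        ("music", "he plays bass guitar in a band and writes songs".split()),
    ]

    # Count how often each sense occurs and which context words occur with it.
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for sense, context in training:
        sense_counts[sense] += 1
        for w in context:
            word_counts[sense][w] += 1
            vocab.add(w)

    def classify(context):
        # Pick the sense maximising log P(sense) + sum of log P(word | sense),
        # with add-one smoothing so unseen words do not zero out a sense.
        best_sense, best_score = None, float("-inf")
        for sense in sense_counts:
            score = math.log(sense_counts[sense] / sum(sense_counts.values()))
            total = sum(word_counts[sense].values())
            for w in context:
                score += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense

    print(classify("more bass in the song please".split()))          # music
    print(classify("we caught a large bass while fishing".split()))  # fish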

It is instructive to compare the word sense disambiguation problem with the problem of part-of-speech tagging. Both involve disambiguating words, be it with senses or with parts of speech. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 95% accuracy or better, compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.

Another aspect of word sense disambiguation that differentiates it from part-of-speech tagging is the availability of training data. While it is relatively easy to assign parts of speech to text, training people to tag senses is far more difficult.[2] While users can memorize all of the possible parts of speech a word can take, it is impossible for individuals to memorize all of the senses a word can take. Thus, many word sense disambiguation algorithms use semi-supervised learning, which makes use of both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm.

The Yarowsky algorithm uses the ‘one sense per collocation’ and the ‘one sense per discourse’ properties of human languages for word sense disambiguation. From observation, words tend to exhibit only one sense in a given discourse and in a given collocation.
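A heavily simplified sketch of the bootstrapping idea follows, using invented seed words and toy contexts. A faithful implementation would rank collocations by a log-likelihood score and also enforce the one-sense-per-discourse constraint, neither of which is shown here.

    from collections import Counter, defaultdict

    # Seed collocations: one word strongly tied to each sense of "bass".
    seeds = {"fish": {"fishing"}, "music": {"song"}}

    # Unlabelled contexts (surrounding words) for occurrences of "bass".
    unlabelled = [
        "went fishing for sea bass".split(),
        "sea bass is tasty grilled".split(),
        "a bass line anchors the song".split(),
        "turn up the bass in this song".split(),
        "he tuned his bass guitar before gigs".split(),
    ]

    labels = {}                                  # context index -> sense
    indicative = {sense: set(words) for sense, words in seeds.items()}

    for _ in range(3):                           # a few bootstrapping rounds
        # 1. Label any still-unlabelled context containing indicative words
        #    of exactly one sense ("one sense per collocation").
        for i, ctx in enumerate(unlabelled):
            if i in labels:
                continue
            hits = {s for s, words in indicative.items() if words & set(ctx)}
            if len(hits) == 1:
                labels[i] = hits.pop()
        # 2. Grow the indicative-word sets from the labelled contexts: a crude
        #    rule adds any word seen only with one sense so far.
        counts = defaultdict(Counter)
        for i, sense in labels.items():
            counts[sense].update(unlabelled[i])
        for sense in counts:
            others = set().union(*(counts[s] for s in counts if s != sense))
            indicative[sense] |= {w for w in counts[sense] if w not in others}

    print(labels)  # contexts 0-3 get labelled over the first two rounds; context 4 never matches and stays unlabelled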

Notes

  1. ^ "Computers versus Common Sense". Retrieved 2008-12-10.
  2. ^ Fellbaum, Christiane (1997). "Analysis of a hand-tagging task". Proceedings of the ANLP-97 Workshop on Tagging Text with Lexical Semantics: Why, What, and How? Washington, D.C., USA.

References

  • Wilks, Y., Slator, B., Guthrie, L. (1996). Electric Words: Dictionaries, Computers and Meanings. Cambridge, MA: MIT Press.
  • Chou, X. Y. (2007). Yarowsky's Unsupervised Algorithm. Oxford Computing Lab.