Language identification

From Wikipedia, the free encyclopedia
Language identification is the process of determining which natural language a given text is written in. Traditionally, identification of written language - as practiced, for instance, in library science - has relied on manually identifying frequent words and letters known to be characteristic of particular languages. More recently, computational approaches have been applied to the problem, treating language identification as a kind of text categorization, a natural language processing task that relies on statistical methods.

Non-computational approaches

In the field of library science, language identification is important for categorizing materials. Because librarians often have to categorize materials in languages they are not familiar with, they sometimes rely on tables of frequent words and distinctive letters or characters to help them identify languages. While identifying a single such word or character may not suffice to distinguish one language from another with a similar orthography, identifying several is often highly reliable.

Statistical approaches

One statistical approach is to compare the compressibility of the text to the compressibility of texts in the known languages, a technique known as the mutual information based distance measure [1]. The same technique can also be used to empirically construct family trees of languages, which closely correspond to the trees constructed using historical methods.
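The compression-based idea can be sketched with a normalized compression distance built on a standard compressor; this is a minimal illustration, not the exact zipping procedure of Benedetto et al., and the tiny reference texts and language labels below are stand-ins for real corpora:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Size of data after zlib compression at the highest level."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def identify(unknown: str, corpora: dict) -> str:
    """Pick the language whose reference text compresses best together with the unknown text."""
    u = unknown.encode("utf-8")
    return min(corpora, key=lambda lang: ncd(corpora[lang].encode("utf-8"), u))

# Hypothetical, deliberately tiny reference texts; a real system needs far larger samples.
corpora = {
    "english": "the quick brown fox jumps over the lazy dog and the cat sat on the mat",
    "german": "der schnelle braune fuchs springt ueber den faulen hund und die katze sass",
}
print(identify("the dog and the cat", corpora))
```

Appending the unknown text to a reference text in the same language adds little new information for the compressor, so the concatenation compresses disproportionately well and the distance is small.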

Another technique, described by Dunning (1994), is to create a language n-gram model from a "training text" for each of the languages. For any piece of text to be identified, a similar model is built and compared with each stored model; the stored language model most similar to the model of the text indicates the most likely language.
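The n-gram approach can be sketched as follows; this simplified version scores a text by its log-likelihood under each language's character trigram distribution (with add-one smoothing), rather than reproducing Dunning's exact comparison, and the training samples are hypothetical miniatures:

```python
from collections import Counter
import math

def ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram counts, with padding spaces marking word boundaries."""
    text = " " + text.lower() + " "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def log_likelihood(model: Counter, text: str, n: int = 3) -> float:
    """Log-probability of the text's n-grams under an add-one-smoothed model."""
    total = sum(model.values())
    vocab = len(model) + 1
    score = 0.0
    for gram, count in ngrams(text, n).items():
        p = (model[gram] + 1) / (total + vocab)  # unseen grams get a small nonzero probability
        score += count * math.log(p)
    return score

def identify(text: str, models: dict) -> str:
    """Return the language whose model assigns the text the highest likelihood."""
    return max(models, key=lambda lang: log_likelihood(models[lang], text))

# Hypothetical, deliberately tiny training texts; real models use much larger corpora.
models = {lang: ngrams(sample) for lang, sample in {
    "english": "the cat sat on the mat and the dog ran in the park",
    "spanish": "el gato se sento en la alfombra y el perro corrio en el parque",
}.items()}
print(identify("the dog and the cat", models))
```

Character n-grams work well here because short letter sequences such as "the" or " el" are strongly language-specific even in very short inputs.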

References

  • Benedetto, D., E. Caglioti and V. Loreto. Language trees and zipping. Physical Review Letters, 88:4 (2002) [2], [3], [4].
  • Cilibrasi, Rudi and Paul M.B. Vitanyi. "Clustering by compression". IEEE Transactions on Information Theory 51(4), April 2005, 1523-1545. [5]
  • Dunning, T. (1994) "Statistical Identification of Language". Technical Report MCCS 94-273, New Mexico State University, 1994.
  • Goodman, Joshua. (2002) Extended comment on "Language Trees and Zipping". Microsoft Research, Feb 21 2002. (This is a criticism of the data compression in favor of the Naive Bayes method.) [6]
  • Poutsma, Arjen. (2001) Applying Monte Carlo techniques to language identification. SmartHaven, Amsterdam. Presented at CLIN 2001.
  • The Economist. (2002) "The elements of style: Analysing compressed data leads to impressive results in linguistics" [7]
  • Survey of the State of the Art in Human Language Technology, (1996), section 8.7 Automatic Language Identification [8]
External links

  • AlchemyAPI web-based language identification API [9]
  • PetaMem Language Identification: ngram, nvect and smart methods [10]
  • Links to LID tools by Gertjan van Noord [11]
  • Implementation of an n-gram based LID tool in Python and Scheme by Damir Cavar [12]
  • Xerox Language Identifier [13]
  • What Language Is This? Online language identification tool written in JavaScript [14]