History of machine translation

Machine translation research generally started in the 1950s, although work can be found from earlier periods. One of the early projects (1954) was the Georgetown experiment, which involved fully automatic translation of more than sixty Russian sentences into English. The experiment was a great success and ushered in an era of significant funding for machine translation research in the United States. The researchers of the Georgetown experiment claimed that within three to five years, machine translation would be a solved problem.[1] In the Soviet Union, similar experiments were performed shortly after.[2]

However, real progress was much slower. In 1966, the ALPAC report found that ten years of research had failed to fulfill the expectations raised by the Georgetown experiment, and funding for machine translation was dramatically reduced as a result. Starting in the late 1980s, as computational power increased and became less expensive, interest grew in statistical models for machine translation.

Today there is still no system that provides the holy grail of "fully automatic high quality translation of unrestricted text".[3][4][5] However, many programs are now available that are capable of providing useful output within strict constraints; several of them are available online, such as Google Translate and the SYSTRAN system that powers AltaVista's Babel Fish (Yahoo!'s since May 9, 2008).

The beginning

The history of machine translation originates in the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.

The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. Another proposal, by Peter Troyanskii, a Russian, was more detailed. It included a bilingual dictionary as well as a method for dealing with grammatical roles between languages, based on Esperanto. The system was split into three stages: in the first, a native-speaking editor in the source language organized the words into their logical forms and syntactic functions; in the second, the machine "translated" these forms into the target language; and in the third, a native-speaking editor in the target language normalized the output. Troyanskii's scheme remained unknown until the late 1950s, by which time computers were commonplace.

The early years

The first proposals for machine translation using computers were put forward by Warren Weaver, a researcher at the Rockefeller Foundation, in his July 1949 memorandum "Translation".[6] These proposals were based on information theory, the successes of code breaking during the Second World War, and speculation about universal underlying principles of natural language.

A few years after these proposals, research began in earnest at many universities in the United States. On 7 January 1954, the Georgetown-IBM experiment, the first public demonstration of an MT system, was held in New York at the head office of IBM. The demonstration was widely reported in the newspapers and attracted much public interest. The system itself, however, was no more than what today would be called a "toy" system: it had a vocabulary of only 250 words and translated just 49 carefully selected Russian sentences into English, mainly in the field of chemistry. Nevertheless, it encouraged the view that machine translation was imminent, and in particular it stimulated the financing of research, not just in the US but worldwide.[1]

Early systems used large bilingual dictionaries and hand-coded rules for fixing the word order of the final output. This approach was eventually found to be too restrictive, and developments in linguistics at the time, such as generative linguistics and transformational grammar, were proposed as ways to improve the quality of translations.
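As a rough illustration of this "direct translation" approach (a toy sketch, not the code of any historical system; the dictionary entries and the reordering rule are invented), such a system amounts to word-for-word dictionary lookup followed by hand-coded reordering:

    # Toy sketch of a "direct translation" system: bilingual dictionary
    # lookup plus one hand-coded word-order rule. The entries are
    # invented for illustration (Spanish to English).
    BILINGUAL_DICT = {
        "la": "the", "casa": "house", "blanca": "white",
        "el": "the", "gato": "cat", "negro": "black",
    }
    ADJECTIVES = {"blanca", "negro"}  # source-language adjectives

    def translate(sentence: str) -> str:
        words = sentence.lower().split()
        # Hand-coded rule: Spanish noun-adjective order becomes
        # English adjective-noun order.
        for i in range(len(words) - 1):
            if words[i + 1] in ADJECTIVES:
                words[i], words[i + 1] = words[i + 1], words[i]
        # Word-for-word lookup; unknown words pass through unchanged.
        return " ".join(BILINGUAL_DICT.get(w, w) for w in words)

    print(translate("la casa blanca"))  # -> "the white house"

The restriction mentioned above is visible even in this sketch: every syntactic divergence between the two languages needs its own hand-written rule.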

During this time, operational systems were installed. The United States Air Force used a system produced by IBM and the University of Washington, while the Atomic Energy Commission in the United States and Euratom in Italy used a system developed at Georgetown University. While the quality of the output was poor, it nevertheless met many of the customers' needs, chiefly in terms of speed.

At the end of the 1950s, Yehoshua Bar-Hillel, a researcher asked by the US government to look into machine translation, put forward an argument against the possibility of "Fully Automatic High Quality Translation" by machines. The argument is one of semantic ambiguity, or double meaning. Consider the following sentence:

Little John was looking for his toy box. Finally he found it. The box was in the pen.

The word pen may have two meanings here: the first, an instrument used to write with; the second, a container of some kind. To a human the intended meaning is obvious, but Bar-Hillel claimed that without a "universal encyclopedia" a machine would never be able to deal with this problem. Today, this type of semantic ambiguity can be avoided by writing source texts for machine translation in a controlled language that uses a vocabulary in which each word has exactly one meaning.
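The difficulty can be made concrete with a small sketch (a toy illustration in the spirit of the much later dictionary-overlap, or "Lesk", approach to word sense disambiguation; the sense glosses below are invented). Choosing a sense of pen by counting overlaps between nearby words and the words associated with each sense fails here, exactly as Bar-Hillel argued, because nothing in the sentence itself links box to enclosures:

    # Toy word sense disambiguation by gloss overlap. The glosses are
    # invented for illustration; a real system would use a dictionary.
    SENSES = {
        "writing instrument": {"write", "ink", "paper", "draw"},
        "enclosure": {"fence", "animals", "play", "yard"},
    }

    def disambiguate(context_words):
        # Score each sense by its overlap with the surrounding words.
        scores = {sense: len(gloss & context_words)
                  for sense, gloss in SENSES.items()}
        best = max(scores, key=scores.get)
        if scores[best] == 0:
            return "undecidable from context alone"
        return best

    context = {"little", "john", "looking", "toy", "box", "found"}
    print(disambiguate(context))  # -> "undecidable from context alone"

Resolving the ambiguity requires world knowledge (a toy box fits in a playpen but not in a fountain pen), which is what Bar-Hillel's "universal encyclopedia" was meant to supply.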

The 1960s, the ALPAC report and the seventies

Research in the 1960s in both the Soviet Union and the United States concentrated mainly on the Russian-English language pair. The texts translated were chiefly scientific and technical documents, such as articles from scientific journals. The rough translations produced were sufficient to get a basic understanding of the articles. If an article discussed a subject deemed to be of security interest, it was sent to a human translator for a complete translation; if not, it was discarded.

A great blow came to machine translation research in 1966 with the publication of the ALPAC report. The report was commissioned by the US government and delivered by ALPAC, the Automatic Language Processing Advisory Committee, a group of seven scientists convened in 1964 in response to concerns about the lack of progress despite significant expenditure. The report concluded that machine translation was more expensive, less accurate and slower than human translation, and that despite the expense, machine translation was not likely to reach the quality of a human translator in the near future.

The report, however, recommended that tools be developed to aid translators — automatic dictionaries, for example — and that some research in computational linguistics should continue to be supported.

The publication of the report had a profound impact on research into machine translation in the United States, and to a lesser extent in the Soviet Union and the United Kingdom. Research in the US was almost completely abandoned for over a decade; in Canada, France and Germany, however, it continued. In the US the main exceptions were the founders of Systran (Peter Toma) and Logos (Bernard Scott), who established their companies in 1968 and 1970 respectively and served the US Department of Defense. In 1970, the Systran system was installed for the United States Air Force, and subsequently, in 1976, by the Commission of the European Communities. The METEO System, developed at the Université de Montréal, was installed in Canada in 1977 to translate weather forecasts from English to French, and translated close to 80,000 words per day (30 million words per year) until it was replaced by a competitor's system on 30 September 2001.[7]

While research in the 1960s concentrated on limited language pairs and input, demand in the 1970s was for low-cost systems that could translate a range of technical and commercial documents. This demand was spurred by increasing globalisation and the growing need for translation in Canada, Europe, and Japan.

The 1980s and early 1990s

By the 1980s, both the diversity and the number of installed systems for machine translation had increased. A number of systems relying on mainframe technology were in use, such as Systran, Logos, Ariane-G5, and Metal.

As a result of the improved availability of microcomputers, there was a market for lower-end machine translation systems. Many companies took advantage of this in Europe, Japan, and the USA. Systems were also brought onto the market in China, Eastern Europe, Korea, and the Soviet Union.

During the 1980s there was a great deal of MT activity, especially in Japan. With the Fifth Generation Computer Systems project, Japan intended to leap over its competition in computer hardware and software, and one project that many large Japanese electronics firms found themselves involved in was creating software for translating to and from English (Fujitsu, Toshiba, NTT, Brother, Catena, Matsushita, Mitsubishi, Sharp, Sanyo, Hitachi, NEC, Panasonic, Kodensha, Nova, Oki).

Research during the 1980s typically relied on translation through some variety of intermediary linguistic representation involving morphological, syntactic, and semantic analysis.

At the end of the 1980s there was a surge of novel methods for machine translation. A system developed at IBM was based on statistical methods. Makoto Nagao and his group used methods based on large numbers of example translations, a technique now termed example-based machine translation.[8][9] A defining feature of both approaches was the lack of syntactic and semantic rules and the reliance instead on the manipulation of large text corpora.
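In outline, the IBM work treated translation as a noisy-channel problem: to translate a foreign sentence f, choose the target-language sentence e that is most probable given f. By Bayes' rule this factors into a language model P(e), estimated from monolingual text, and a translation model P(f | e), estimated from aligned bilingual corpora:

    \hat{e} = \arg\max_e P(e \mid f) = \arg\max_e P(e)\, P(f \mid e)

Both factors are learned from corpora rather than written by linguists, which is the sense in which these systems dispensed with hand-coded rules.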

During the 1990s, encouraged by successes in speech recognition and speech synthesis, research began into speech translation with the development of the German Verbmobil project.

There was significant growth in the use of machine translation as a result of the advent of low-cost and more powerful computers. It was in the early 1990s that machine translation began to make the transition away from large mainframe computers toward personal computers and workstations. Two companies that led the PC market for a time were Globalink and MicroTac, which merged in December 1994. Intergraph and Systran also began to offer PC versions around this time. Sites also became available on the internet, such as AltaVista's Babel Fish (using Systran technology) and Google Language Tools (also initially using Systran technology exclusively).

2000s

The field of machine translation has seen major changes in the 2000s. A large amount of research has been done into statistical machine translation and example-based machine translation. In the area of speech translation, research has focused on moving from domain-limited systems to domain-unlimited translation systems. In research projects in Europe (such as TC-STAR)[10] and in the United States (STR-DUST and US-DARPA-GALE),[11] solutions for automatically translating parliamentary speeches and broadcast news have been developed. In these scenarios the domain of the content is no longer limited to any special area; the speeches to be translated cover a variety of topics. More recently, the French-German project Quaero has investigated the possibility of using machine translation for a multilingual internet. The project seeks to translate not only webpages but also videos and audio files found on the internet.

Today, only a few companies use statistical machine translation commercially, e.g. Asia Online, SDL / Language Weaver (which sells translation products and services), Google (which uses its proprietary statistical MT system for some language combinations in Google's language tools), Microsoft (which uses its proprietary statistical MT system to translate knowledge base articles), and Ta with you (which offers a domain-adapted machine translation solution based on statistical MT with some linguistic knowledge). There has been renewed interest in hybridisation, with researchers combining syntactic and morphological (i.e., linguistic) knowledge with statistical systems, as well as combining statistics with existing rule-based systems.

Notes

  1. Hutchins, J. (2005). "The history of machine translation in a nutshell". [self-published source]
  2. Madsen, Mathias Winther (23 December 2009). The Limits of Machine Translation (Thesis). University of Copenhagen. p. 11.
  3. Melby, Alan K. (1995). The Possibility of Language. Amsterdam: J. Benjamins. pp. 27–41. ISBN 9027216142.
  4. Wooten, Adam (February 14, 2006). "A Simple Model Outlining Translation Technology". T&I Business.
  5. "Appendix III of 'The present status of automatic translation of languages'". Advances in Computers 1. 1960. pp. 158–163. Reprinted in Y. Bar-Hillel (1964). Language and Information. Massachusetts: Addison-Wesley. pp. 174–179.
  6. "Weaver memorandum". March 1949. Archived from the original on 2006-10-05.
  7. "Procurement process". Canadian International Trade Tribunal. 30 July 2002. Archived from the original on 2011-07-06. Retrieved 2007-02-10.
  8. Nagao, Makoto (1984). "A Framework of a Mechanical Translation Between Japanese and English by Analogy Principle". Proceedings of the International NATO Symposium on Artificial and Human Intelligence. New York: Elsevier North-Holland. pp. 173–180. ISBN 0-444-86545-4.
  9. "The Association for Computational Linguistics – 2003 ACL Lifetime Achievement Award". Association for Computational Linguistics. Retrieved 2010-03-10.
  10. "TC-Star". Retrieved 2010-10-25.
  11. "U.S.-DARPA-GALE". Retrieved 2010-10-25.
