Revision as of 01:24, 31 August 2021

LEPOR (Length Penalty, Precision, n-gram Position difference Penalty and Recall) is an automatic, language-independent machine translation evaluation metric with tunable parameters and reinforced factors.

Background

Since IBM proposed and realized BLEU[1] as an automatic metric for machine translation (MT) evaluation,[2] many other methods have been proposed to revise or improve it, such as TER and METEOR.[3] However, the traditional automatic evaluation metrics have some known problems. Some metrics perform well on certain languages but poorly on others, which is usually called the language-bias problem. Some metrics rely on many linguistic features or resources, which makes their experiments difficult for other researchers to reproduce. LEPOR is an automatic evaluation metric that tries to address some of these problems.[4] It is designed with augmented factors and corresponding tunable parameters to mitigate the language-bias problem. An improved version, hLEPOR,[5] additionally uses optimized linguistic features extracted from treebanks. Another variant, nLEPOR,[6] adds n-gram features to the earlier factors. The LEPOR metric has since grown into a series of metrics.[7][8]

LEPOR metrics have been studied and analyzed by researchers from different fields, such as machine translation,[9] natural language generation,[10] and search,[11] and they are receiving growing attention in natural language processing.

Design

LEPOR[12] is designed around the factors of enhanced length penalty, precision, n-gram position difference penalty, and recall. The enhanced length penalty ensures that a hypothesis translation, usually produced by a machine translation system, is penalized if it is longer or shorter than the reference translation. The precision score reflects the accuracy of the hypothesis translation, and the recall score reflects its faithfulness to the reference translation or the source language. The n-gram-based position difference penalty accounts for differing word orders between the hypothesis and reference translations. Word-order penalty factors have been shown to be useful by other researchers, such as Wong and Kit (2008).[13]
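The interaction of these factors can be illustrated with a minimal Python sketch. This is not the published implementation: the bag-of-words unigram matching, the nearest-match position alignment, and the default weights `alpha` and `beta` are simplifying assumptions here, and the original papers define the alignment and parameter tuning more carefully.

```python
import math

def length_penalty(ref_len, hyp_len):
    # Penalize hypotheses that are longer or shorter than the reference;
    # equal lengths incur no penalty.
    if hyp_len == ref_len:
        return 1.0
    if hyp_len < ref_len:
        return math.exp(1 - ref_len / hyp_len)
    return math.exp(1 - hyp_len / ref_len)

def precision_recall(hyp, ref):
    # Unigram precision/recall over bag-of-words matches (each reference
    # token may be consumed at most once).
    matched, pool = 0, list(ref)
    for tok in hyp:
        if tok in pool:
            matched += 1
            pool.remove(tok)
    return matched / len(hyp), matched / len(ref)

def position_penalty(hyp, ref):
    # Simplified position-difference penalty: align each hypothesis token
    # to the nearest matching reference token by relative position.
    total = 0.0
    for i, tok in enumerate(hyp):
        positions = [j for j, r in enumerate(ref) if r == tok]
        if positions:
            total += min(abs(i / len(hyp) - j / len(ref)) for j in positions)
    return math.exp(-total / len(hyp))

def lepor(hyp, ref, alpha=1.0, beta=1.0):
    # Combine the factors: length penalty x position penalty x
    # weighted harmonic mean of recall and precision.
    p, r = precision_recall(hyp, ref)
    if p == 0 or r == 0:
        return 0.0
    harmonic = (alpha + beta) / (alpha / r + beta / p)
    return length_penalty(len(ref), len(hyp)) * position_penalty(hyp, ref) * harmonic
```

A hypothesis identical to its reference scores 1.0, and any length, content, or order mismatch pulls the score below 1.0 through the corresponding factor.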

Because surface string matching metrics were criticized for lacking syntactic and semantic awareness, the further developed hLEPOR metric investigates the integration of linguistic features, such as part of speech (POS).[14][15] POS carries both syntactic and semantic information: for example, if a token in the output sentence is a verb where a noun is expected, a penalty applies; conversely, if the POS matches but the exact word does not (e.g. "good" vs. "nice"), the candidate still gains partial credit. The overall hLEPOR score is then calculated as a weighted combination of a word-level score and a POS-level score. Language-model-inspired n-gram knowledge is explored further in nLEPOR.[16][17] Beyond its use in the n-gram position difference penalty, nLEPOR also applies n-grams to precision and recall, with n as an adjustable parameter. In addition to the POS knowledge in hLEPOR, a newer variant, HPPR, incorporates phrase structure from parsing information.[18] In HPPR evaluation modeling, phrase structures such as noun phrases, verb phrases, prepositional phrases, and adverbial phrases are considered when matching the candidate text against the reference text.
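The word-plus-POS idea behind hLEPOR can be sketched as computing the same overlap score twice, once on surface tokens and once on POS tags, then combining the two with tunable weights. The tag dictionary, the F1-style overlap, and the weight values below are illustrative assumptions, not the exact hLEPOR formulation (which uses the full LEPOR factors at each level); a real system would also use a POS tagger rather than a lookup table.

```python
# Hypothetical POS lookups for a toy example; a real system would run a tagger.
POS = {"the": "DET", "movie": "NOUN", "was": "VERB", "good": "ADJ", "nice": "ADJ"}

def unigram_f1(hyp, ref):
    # F1 of bag-of-words unigram matches between hypothesis and reference.
    matched, pool = 0, list(ref)
    for tok in hyp:
        if tok in pool:
            matched += 1
            pool.remove(tok)
    if matched == 0:
        return 0.0
    p, r = matched / len(hyp), matched / len(ref)
    return 2 * p * r / (p + r)

def hlepor_sketch(hyp, ref, w_word=1.0, w_pos=9.0):
    # Score at the surface-word level and again at the POS level, then
    # combine with tunable weights: "good" vs "nice" mismatches as a word
    # but both map to ADJ, so the POS level still awards credit.
    word_score = unigram_f1(hyp, ref)
    pos_score = unigram_f1([POS[t] for t in hyp], [POS[t] for t in ref])
    return (w_word * word_score + w_pos * pos_score) / (w_word + w_pos)
```

With `hyp = "the movie was nice"` and `ref = "the movie was good"`, the combined score exceeds the pure word-level score because the POS sequences match exactly.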

Software Implementation

LEPOR metrics were originally implemented in the Perl programming language;[19] a Python version[20] has since been made available by other researchers and engineers,[21] accompanied by a press announcement[22] from the Logrus Global language service company.

Performance

The LEPOR series has performed well in the ACL annual workshop on statistical machine translation (ACL-WMT), which is organized by the special interest group on machine translation (SIGMT) of the Association for Computational Linguistics (ACL). ACL-WMT 2013[23] had two translation and evaluation tracks, English-to-other and other-to-English, where "other" covers Spanish, French, German, Czech, and Russian. In the English-to-other direction, nLEPOR achieved the highest system-level correlation with human judgments by Pearson correlation coefficient and the second highest by Spearman rank correlation coefficient. In the other-to-English direction, nLEPOR performed moderately while METEOR yielded the highest correlation with human judgments; this reflects the fact that, beyond the officially provided training data, nLEPOR uses only one concise linguistic feature (part-of-speech information), whereas METEOR draws on many external resources such as synonym dictionaries, paraphrases, and stemming.
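System-level meta-evaluation of this kind can be reproduced in miniature: collect one score per MT system from the metric and one from human judges, then correlate the two lists. A small self-contained sketch of both coefficients follows; it ignores tied ranks, which the full Spearman formula handles, so it is a simplification rather than a drop-in replacement for a statistics library.

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation: covariance normalized by the product of
    # standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var

def spearman(xs, ys):
    # Spearman rank correlation: Pearson correlation of the rank vectors.
    # Tied values are not averaged here (a simplifying assumption).
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))
```

For example, metric scores that rank systems in the same order as human judgments give a Spearman correlation of 1.0 even when the relationship is nonlinear, while Pearson also rewards a linear fit.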

An extended description of LEPOR's performance under different conditions, including pure word-surface form, POS features, and phrase-tag features, is given in a thesis from the University of Macau.[24]

A deeper statistical analysis of hLEPOR and nLEPOR performance in WMT13 (Graham et al. 2015, "Accurate Evaluation of Segment-level Machine Translation Metrics", NAACL, https://www.aclweb.org/anthology/N15-1124; data at https://github.com/ygraham/segment-mteval) shows that it performed as one of the best metrics "in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs".

Applications

The LEPOR metric series has been applied and used by researchers in various fields of natural language processing, for instance in standard MT and neural MT.[25] Outside the MT community, LEPOR has been applied in search evaluation,[26] mentioned for evaluating code (programming-language) generation,[27] and included among the metrics investigated for automatic evaluation of natural language generation,[28][29] where it has been argued that automatic metrics can help system-level evaluations; LEPOR has also been applied in image-captioning evaluation.[30]

See also

Notes

  1. ^ Papineni et al., (2002)
  2. ^ Han, (2016)
  3. ^ Banerjee and Lavie, (2005)
  4. ^ Han et al., (2012)
  5. ^ Han et al., (2013a)
  6. ^ Han et al., (2013b)
  7. ^ Han et al., (2014)
  8. ^ Han, (2014)
  9. ^ Graham et al., (2015)
  10. ^ Novikova et al., (2017)
  11. ^ Liu et al., (2021)
  12. ^ Han et al. (2012)
  13. ^ Wong and Kit, (2008)
  14. ^ Han et al. (2013a)
  15. ^ Han (2014)
  16. ^ Han et al. (2013b)
  17. ^ Han (2014)
  18. ^ Han et al. (2013c)
  19. ^ https://github.com/aaronlifenghan/aaron-project-lepor
  20. ^ https://pypi.org/project/hLepor/
  21. ^ https://github.com/lHan87/LEPOR
  22. ^ https://slator.com/press-releases/logrus-global-adds-hlepor-translation-quality-evaluation-metric-python-implementation-on-pypi-org/
  23. ^ ACL-WMT (2013)
  24. ^ Han (2014)
  25. ^ Marzouk and Hansen-Schirra (2019)
  26. ^ Liu et al. (2021)
  27. ^ Liguori et al. (2021)
  28. ^ Novikova et al. (2017)
  29. ^ Celikyilmaz et al. (2020)
  30. ^ Qiu et al. (2020)

References

  • Papineni, K., Roukos, S., Ward, T., and Zhu, W. J. (2002). "BLEU: a method for automatic evaluation of machine translation" in ACL-2002: 40th Annual meeting of the Association for Computational Linguistics pp. 311–318
  • Han, A.L.F., Wong, D.F., and Chao, L.S. (2012) "LEPOR: A Robust Evaluation Metric for Machine Translation with Augmented Factors" in Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012): Posters, pp. 441–450. Mumbai, India. Online paper Open source tool
  • Han, A.L.F., Wong, D.F., Chao, L.S., He, L., Lu, Y., Xing, J., and Zeng, X. (2013a) "Language-independent Model for Machine Translation Evaluation with Reinforced Factors" in Proceedings of the Machine Translation Summit XIV (MT SUMMIT 2013), pp. 215-222. Nice, France. Publisher: International Association for Machine Translation. Online paper Open source tool
  • Han, A.L.F., Wong, D.F., Chao, L.S., Lu, Y., He, L., Wang, Y., and Zhou, J. (2013b) "A Description of Tunable Machine Translation Evaluation Systems in WMT13 Metrics Task" in Proceedings of the Eighth Workshop on Statistical Machine Translation, ACL-WMT13, Sofia, Bulgaria. Association for Computational Linguistics. Online paper pp. 414–421
  • Han, A.L.F., Wong, D.F., Chao, L.S., He, L., and Lu, Y. (2014) "Unsupervised Quality Estimation Model for English to German Translation and Its Application in Extensive Supervised Evaluation". The Scientific World Journal. Hindawi Publishing Corporation. ISSN 1537-744X. doi:10.1155/2014/760301
  • ACL-WMT. (2013) "ACL-WMT13 METRICS TASK"
  • Wong, B. T-M, and Kit, C. (2008). "Word choice and word position for automatic MT evaluation" in Workshop: MetricsMATR of the Association for Machine Translation in the Americas (AMTA), short paper, Waikiki, US.
  • Banerjee, S. and Lavie, A. (2005) "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments" in Proceedings of Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association of Computational Linguistics (ACL-2005), Ann Arbor, Michigan, June 2005
  • Han, Lifeng. (2014) "LEPOR: An Augmented Machine Translation Evaluation Metric". Thesis for Master of Science in Software Engineering. University of Macau, Macao. [1] PPT
  • Yvette Graham, Timothy Baldwin, and Nitika Mathur. (2015) Accurate evaluation of segment-level machine translation metrics. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1183–1191.
  • Han, Lifeng. (2016) "Machine Translation Evaluation Resources and Methods: A Survey". arXiv preprint arXiv:1605.04515
  • Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. (2017) Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
  • Zeyang Liu, Ke Zhou, and Max L. Wilson. (2021) Meta-evaluation of Conversational Search Evaluation Metrics. arXiv:2104.13453
  • Pietro Liguori et al. (2021) Shellcode_IA32: A Dataset for Automatic Shellcode Generation. arXiv:2104.13100
  • Celikyilmaz, A., Clark, E., and Gao, J. (2020) Evaluation of Text Generation: A Survey. arXiv preprint arXiv:2006.14799
  • Qiu, D., Rothrock, B., Islam, T., Didier, A.K., Sun, V.Z., et al. (2020) SCOTI: Science Captioning of Terrain Images for data prioritization and local image search. Planetary and Space Science. Elsevier.
  • Marzouk, S. and Hansen-Schirra, S. (2019) "Evaluation of the impact of controlled language on neural machine translation compared to other MT architectures". Machine Translation. doi:10.1007/s10590-019-09233-w
  • Han, A.L.F., Wong, D.F., Chao, L.S., He, L., Li, S., and Zhu, L. (2013c) "Phrase Tagset Mapping for French and English Treebanks and Its Application in Machine Translation Evaluation". In: Gurevych, I., Biemann, C., Zesch, T. (eds) Language Processing and Knowledge in the Web. Lecture Notes in Computer Science, vol 8105. Springer, Berlin, Heidelberg. doi:10.1007/978-3-642-40722-2_13

External links