Textual entailment

From Wikipedia, the free encyclopedia

Textual entailment (TE) in natural language processing is a directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text. In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. Textual entailment is not the same as pure logical entailment; it has a more relaxed definition: "t entails h" (t ⇒ h) if, typically, a human reading t would infer that h is most likely true.[1] The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain.[2][3]

Examples

Textual entailment can be illustrated with examples of three different relations:[4]

An example of a positive TE (text entails hypothesis) is:

  • text: If you help the needy, God will reward you.
  • hypothesis: Giving money to a poor man has good consequences.

An example of a negative TE (text contradicts hypothesis) is:

  • text: If you help the needy, God will reward you.
  • hypothesis: Giving money to a poor man has no consequences.

An example of a non-TE (text neither entails nor contradicts the hypothesis) is:

  • text: If you help the needy, God will reward you.
  • hypothesis: Giving money to a poor man will make you a better person.

Ambiguity of natural language

A characteristic of natural language is that there are many different ways to state the same thing: a single text can carry several meanings, and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together they result in a many-to-many mapping between language expressions and meanings. The task of paraphrasing involves recognizing when two texts have the same meaning and creating a similar or shorter text that conveys almost the same information. Textual entailment is similar[5] but weakens the relationship to be unidirectional. Mathematical methods for establishing textual entailment can exploit the directional property of this relation by comparing directional similarities of the texts involved.[3]
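The directional-similarity idea can be illustrated with a minimal lexical sketch (a simplified toy measure for illustration, not any published system): score how much of the hypothesis's vocabulary is covered by the text. Because coverage is computed relative to the hypothesis, the measure is asymmetric, mirroring the directionality of entailment.

```python
def directional_overlap(text: str, hypothesis: str) -> float:
    """Asymmetric lexical coverage: what fraction of the hypothesis's
    words also appear in the text? (Toy measure for illustration.)"""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    if not h_words:
        return 0.0
    return len(h_words & t_words) / len(h_words)

t = "John bought a new red car yesterday"
h = "John bought a car"
print(directional_overlap(t, h))  # high: every hypothesis word occurs in the text
print(directional_overlap(h, t))  # lower: the longer text adds uncovered words
```

Swapping the arguments changes the score, which is the point: "t entails h" and "h entails t" are evaluated differently.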

Approaches

Textual entailment measures natural language understanding in that it asks for a semantic interpretation of the text, and due to its generality it remains an active area of research. Many approaches and refinements have been proposed, such as word embeddings, logical models, graphical models, rule systems, contextual focusing, and machine learning.[5] Practical or large-scale solutions avoid these complex methods and instead use only surface syntax or lexical relationships, but are correspondingly less accurate.[2] Even state-of-the-art systems remain far from human performance: one study found human annotators to agree on entailment judgments 95.25% of the time,[6] while algorithms as of 2016 had not yet reached 90%.[7]
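The surface-level, lexical style of solution mentioned above can be sketched as a toy three-way classifier (a hypothetical illustration, not a published system): label a pair "entailment" when the text covers most of the hypothesis's content words, "contradiction" when coverage is high but a negation word appears on only one side, and "unknown" otherwise. The word list and threshold below are arbitrary choices for the sketch.

```python
NEGATIONS = {"no", "not", "never", "none"}  # illustrative list, far from complete

def naive_entailment_label(text: str, hypothesis: str, threshold: float = 0.8) -> str:
    """Toy surface-lexical classifier: coverage of the hypothesis's
    non-negation words by the text decides entailment; a negation
    word on only one side of the pair flips the label to contradiction."""
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    content_h = h - NEGATIONS
    coverage = len(content_h & t) / len(content_h) if content_h else 0.0
    neg_mismatch = bool(NEGATIONS & (t ^ h))  # negation on exactly one side
    if coverage >= threshold:
        return "contradiction" if neg_mismatch else "entailment"
    return "unknown"
```

A classifier this shallow fails on paraphrase ("assist the poor" shares no words with "help the needy"), which is why lexical baselines trade accuracy for simplicity.

```python
naive_entailment_label("John bought a car", "John bought a car")   # entailment
naive_entailment_label("John bought a car", "John bought no car")  # contradiction
naive_entailment_label("John bought a car", "Mary sold a house")   # unknown
```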

Applications

Many natural language processing applications, like Question Answering (QA), Information Extraction (IE), (multi-document) summarization and machine translation (MT) evaluation, need to recognize that a particular target meaning can be inferred from different text variants. Typically entailment is used as part of a larger system, for example in a prediction system to filter out trivial or obvious predictions.[8]

References

  1. ^ Ido Dagan, Oren Glickman and Bernardo Magnini. The PASCAL Recognising Textual Entailment Challenge, p. 2 in: Quiñonero-Candela, J.; Dagan, I.; Magnini, B.; d'Alché-Buc, F. (Eds.) Machine Learning Challenges. Lecture Notes in Computer Science, Vol. 3944, pp. 177–190, Springer, 2006.
  2. ^ a b Dagan, I. and O. Glickman. 'Probabilistic textual entailment: Generic applied modeling of language variability' in: PASCAL Workshop on Learning Methods for Text Understanding and Mining (2004) Grenoble.
  3. ^ a b Tătar, D. et al. Textual Entailment as a Directional Relation
  4. ^ Textual Entailment Portal on the Association for Computational Linguistics wiki
  5. ^ a b Androutsopoulos, Ion; Malakasiotis, Prodromos (18 December 2009). "A Survey of Paraphrasing and Textual Entailment Methods" (PDF). Journal of Artificial Intelligence Research. doi:10.1613/jair.2985 (inactive 2017-03-04). Retrieved 13 February 2017. 
  6. ^ Bos, Johan; Markert, Katja (1 January 2005). "Recognising Textual Entailment with Logical Inference" (PDF). Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics: 628–635. doi:10.3115/1220575.1220654. Retrieved 13 February 2017. 
  7. ^ Zhao, Kai; Huang, Liang; Ma, Mingbo (4 January 2017). "Textual Entailment with Structured Attentions and Composition". arXiv:1701.01126 [cs.CL]. 
  8. ^ Shani, Ayelett (25 October 2013). "How Dr. Kira Radinsky Used Algorithms to Predict Riots in Egypt". Haaretz. Retrieved 13 February 2017. 
