
ROUGE (metric)


ROUGE, or Recall-Oriented Understudy for Gisting Evaluation,[1] is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference (typically human-produced) summary or translation, or against a set of such references. ROUGE scores range between 0 and 1, with higher scores indicating greater similarity between the automatically produced summary and the reference.

Metrics


The following five evaluation metrics are available; a minimal example computation is sketched after the list.

  • ROUGE-N: Overlap of n-grams[2] between the system and reference summaries.
    • ROUGE-1 refers to the overlap of unigrams (each word) between the system and reference summaries.
    • ROUGE-2 refers to the overlap of bigrams between the system and reference summaries.
  • ROUGE-L: Longest Common Subsequence (LCS)[3] based statistics. The longest common subsequence naturally takes sentence-level structural similarity into account and automatically identifies the longest co-occurring in-sequence n-grams.
  • ROUGE-W: Weighted LCS-based statistics that favors consecutive LCSes.
  • ROUGE-S: Skip-bigram[3] based co-occurrence statistics. A skip-bigram is any pair of words in their sentence order.
  • ROUGE-SU: Skip-bigram plus unigram-based co-occurrence statistics.
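The sketch below is a minimal, illustrative Python computation of recall-oriented ROUGE-1, ROUGE-2, and ROUGE-L against a single reference, not the official ROUGE-1.5.5 implementation: the whitespace tokenization, the function names, and the example sentences are assumptions made for illustration, and real implementations additionally offer stemming, stopword removal, multiple references, and precision/F-measure variants.

# Minimal sketch of recall-oriented ROUGE-N and ROUGE-L scoring against one
# reference, assuming simple whitespace tokenization (illustrative only).
from collections import Counter


def ngrams(tokens, n):
    """Return a multiset (Counter) of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def rouge_n_recall(system, reference, n=1):
    """Fraction of reference n-grams that also appear in the system summary."""
    sys_ngrams = ngrams(system.split(), n)
    ref_ngrams = ngrams(reference.split(), n)
    overlap = sum((sys_ngrams & ref_ngrams).values())  # clipped overlap count
    total = sum(ref_ngrams.values())
    return overlap / total if total else 0.0


def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_recall(system, reference):
    """LCS length divided by reference length (recall-oriented ROUGE-L)."""
    sys_tokens, ref_tokens = system.split(), reference.split()
    return lcs_length(sys_tokens, ref_tokens) / len(ref_tokens) if ref_tokens else 0.0


if __name__ == "__main__":
    ref = "the cat was found under the bed"
    hyp = "the cat was under the bed"
    print(round(rouge_n_recall(hyp, ref, n=1), 3))  # ROUGE-1 recall: 6/7 ≈ 0.857
    print(round(rouge_n_recall(hyp, ref, n=2), 3))  # ROUGE-2 recall: 4/6 ≈ 0.667
    print(round(rouge_l_recall(hyp, ref), 3))       # ROUGE-L recall: 6/7 ≈ 0.857

In this example the system summary reproduces six of the seven reference unigrams and four of the six reference bigrams, giving ROUGE-1 ≈ 0.857 and ROUGE-2 ≈ 0.667; because the system summary is also an in-order subsequence of the reference, ROUGE-L recall is likewise 6/7 ≈ 0.857.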


References
