Exploring correlation between ROUGE and human evaluation on meeting summaries

  • Authors:
  • Feifan Liu;Yang Liu

  • Affiliations:
  • Department of Computer Science, The University of Texas at Dallas, Richardson, TX;Department of Computer Science, The University of Texas at Dallas, Richardson, TX

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing
  • Year:
  • 2010

Abstract

Automatic summarization evaluation is very important to the development of summarization systems. In text summarization, ROUGE has been shown to correlate well with human evaluation when measuring the match of content units. However, the multiparty meeting domain has many characteristics that may pose problems for ROUGE. The goal of this paper is to examine how well ROUGE scores correlate with human evaluation for extractive meeting summarization, and to explore different meeting-domain-specific factors that affect this correlation. More extensive analysis than in our previous work [1] has been conducted in this study. Our experiments show that the correlation between ROUGE and human evaluation is generally weak; however, when several unique meeting characteristics, such as disfluencies, speaker information, and stopwords, are accounted for in the ROUGE settings, better correlation can be achieved, especially on the system summaries. We also found that these factors affect human and system summaries differently. In addition, we contrast the results using ROUGE with other automatic summarization evaluation metrics, such as Kappa and Pyramid, and show the appropriateness of using ROUGE for this study.
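The core measurement the abstract describes is the agreement between automatic ROUGE scores and human judgments over a set of summaries. A minimal sketch of that step, assuming per-summary ROUGE and human scores are already available (the numeric values below are hypothetical placeholders, not the paper's data, and this is not the authors' code):

```python
# Illustrative sketch: correlate automatic ROUGE scores with human
# evaluation scores for the same set of summaries.
from scipy.stats import pearsonr, spearmanr

# Hypothetical placeholder values, one pair per evaluated summary;
# in the paper's setting these would come from ROUGE and human judges.
rouge_scores = [0.42, 0.35, 0.51, 0.29, 0.47]
human_scores = [3.8, 3.1, 4.2, 2.9, 4.0]

# Pearson measures linear agreement; Spearman measures rank agreement.
pearson_r, pearson_p = pearsonr(rouge_scores, human_scores)
spearman_rho, spearman_p = spearmanr(rouge_scores, human_scores)

print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3f})")
```

Rerunning such a correlation after changing the ROUGE setup (e.g., removing disfluencies or stopwords, or including speaker information) is one way to compare how different preprocessing choices affect agreement with human evaluation.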