GEMS: generative modeling for evaluation of summaries

  • Authors:
  • Rahul Katragadda

  • Affiliations:
  • Language Technologies Research Center, IIIT, Hyderabad

  • Venue:
  • CICLing'10 Proceedings of the 11th international conference on Computational Linguistics and Intelligent Text Processing
  • Year:
  • 2010


Abstract

Automated evaluation is crucial for automated text summarization, as it is for any language technology. In this paper we present a generative modeling framework for evaluating the content of summaries. We use two simple alternatives for identifying signature terms from the reference summaries, based on model consistency and part-of-speech (POS) features. The generative modeling approach captures the sentence-level presence of these signature terms in peer summaries. We show that parts of speech such as nouns and verbs provide a simple and robust method of signature-term identification for the generative modeling approach. We also show that a large set of 'significant signature terms' works better for our approach than a small set of 'strong signature terms'. Our results show that the generative modeling approach is indeed promising, providing high correlations with manual evaluations, and that further investigation of signature-term identification methods could yield further improvements. The efficacy of the approach can be seen in its ability to capture 'overall responsiveness' much better than the state of the art when distinguishing a human from a system.
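The idea of scoring a peer summary by the sentence-level presence of POS-based signature terms can be sketched roughly as follows. This is a minimal illustration, not the paper's actual model: the toy POS lexicon, the noun/verb extraction rule, and the per-term averaging scheme are all assumptions made for the example.

```python
# Toy POS lexicon standing in for a real POS tagger (an assumption for
# illustration; the paper does not specify a tagger here).
POS = {
    "summaries": "NN", "evaluation": "NN", "model": "NN",
    "content": "NN", "system": "NN", "capture": "VB", "ranks": "VB",
}

def signature_terms(reference_summaries):
    """Collect nouns and verbs from the reference summaries as signature terms."""
    terms = set()
    for summary in reference_summaries:
        for word in summary.lower().split():
            w = word.strip(".,;:'\"")
            if POS.get(w, "").startswith(("NN", "VB")):
                terms.add(w)
    return terms

def sentence_presence_score(peer_summary, terms):
    """Average, over signature terms, of the fraction of peer sentences
    containing the term -- a simple stand-in for modeling the
    sentence-level presence of signature terms in a peer summary."""
    sentences = [s.strip().lower() for s in peer_summary.split(".") if s.strip()]
    if not sentences or not terms:
        return 0.0
    sent_tokens = [[w.strip(".,;:'\"") for w in s.split()] for s in sentences]
    per_term = [
        sum(1 for toks in sent_tokens if t in toks) / len(sent_tokens)
        for t in terms
    ]
    return sum(per_term) / len(terms)
```

A peer summary whose sentences consistently mention the reference summaries' nouns and verbs scores closer to 1; one that misses them scores closer to 0.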