An update summary should provide a fluent summary of new information on a time-evolving topic, assuming that the reader has already reviewed older documents or summaries. In 2007 and 2008, an annual summarization evaluation included an update summarization task. Several participating systems produced update summaries that were indistinguishable from human-generated summaries when measured with ROUGE. However, no machine system approached human-level performance in manual evaluations such as pyramid and overall responsiveness scoring. We present a metric, Nouveau-ROUGE, that improves correlation with manual evaluation metrics and can be used to predict both the pyramid score and overall responsiveness of update summaries. Nouveau-ROUGE can serve as a less expensive surrogate for manual evaluations, both when comparing existing systems and when developing new ones.
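The abstract does not give the form of the metric; a minimal sketch, assuming Nouveau-ROUGE linearly combines a summary's ROUGE score against the background (already-read) documents with its ROUGE score against the update documents, fitting the coefficients to available manual scores, might look like the following. All function names, variable names, and numbers here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a Nouveau-ROUGE-style predictor.
# Assumption: the metric is a linear combination of two automatic scores per
# system -- ROUGE against the background documents and ROUGE against the
# update documents -- with weights fit on systems that also have manual
# pyramid (or overall responsiveness) scores.
import numpy as np

def fit_nouveau_coefficients(rouge_background, rouge_update, manual_scores):
    """Least-squares fit of manual scores from the two ROUGE components."""
    X = np.column_stack([np.ones(len(manual_scores)),
                         rouge_background,
                         rouge_update])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(manual_scores), rcond=None)
    return coeffs  # intercept, weight on background ROUGE, weight on update ROUGE

def nouveau_rouge(coeffs, rouge_background, rouge_update):
    """Predicted manual score (e.g. pyramid) for a new update summarizer."""
    return coeffs[0] + coeffs[1] * rouge_background + coeffs[2] * rouge_update

if __name__ == "__main__":
    # Illustrative per-system averages only (not data from the evaluation).
    rouge_bg = [0.08, 0.10, 0.09, 0.12]   # ROUGE vs. background references
    rouge_up = [0.09, 0.11, 0.10, 0.13]   # ROUGE vs. update references
    pyramid  = [0.25, 0.31, 0.28, 0.36]   # manual pyramid scores
    w = fit_nouveau_coefficients(rouge_bg, rouge_up, pyramid)
    print("predicted pyramid for a new system:", nouveau_rouge(w, 0.11, 0.12))
```

Under this reading, the fitted predictor can rank new systems without rerunning the manual pyramid or responsiveness protocols, which is the "less expensive surrogate" role described above.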