Empirical Methods for Artificial Intelligence.
The use of MMR, diversity-based reranking for reordering documents and producing summaries. In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Auto-summarization of audio-video presentations. In: MULTIMEDIA '99, Proceedings of the Seventh ACM International Conference on Multimedia (Part 1).
Comparing presentation summaries: slides vs. reading vs. listening. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Time is of the essence: an evaluation of temporal compression algorithms. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
The effect of speech recognition accuracy rates on the usefulness and usability of webcast archives. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Extrinsic summarization evaluation: a decision audit task. In: MLMI '08, Proceedings of the 5th International Workshop on Machine Learning for Multimodal Interaction.
Catchup: a useful application of time-travel in meetings. In: Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work.
Exploring correlation between ROUGE and human evaluation on meeting summaries. IEEE Transactions on Audio, Speech, and Language Processing.
There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account how useful a summary is in helping the listener achieve his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. To this end, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that, in the domain of lecture summarization, the well-known baseline of maximal marginal relevance (Carbonell and Goldstein, 1998) is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all, in a simple quiz-taking task. Priming appears to have no statistically significant effect on the usefulness of the human summaries. In addition, ROUGE scores, and in particular the context-free annotations that are often supplied to ROUGE as references, may not always be reliable as inexpensive proxies for ecologically valid evaluations. In fact, under some conditions, relying exclusively on ROUGE may even lead to very favourable scores for human-generated summaries whose usefulness, relative to using no summary at all, is inconsistent.
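For readers unfamiliar with the maximal marginal relevance baseline discussed above, the following is a minimal, hedged sketch of greedy MMR sentence selection. It is not the authors' implementation: it assumes simple bag-of-words term-frequency vectors, cosine similarity for both relevance and redundancy, and an illustrative trade-off value lambda_ = 0.7; the function names mmr_summary and cosine are hypothetical.

    import math
    from collections import Counter

    def cosine(a, b):
        """Cosine similarity between two term-frequency Counters."""
        common = set(a) & set(b)
        dot = sum(a[t] * b[t] for t in common)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def mmr_summary(sentences, query, k=3, lambda_=0.7):
        """Greedily select k sentences, balancing relevance to the query
        against redundancy with sentences already selected (MMR)."""
        vecs = [Counter(s.lower().split()) for s in sentences]
        qvec = Counter(query.lower().split())
        selected, candidates = [], list(range(len(sentences)))
        while candidates and len(selected) < k:
            def score(i):
                relevance = cosine(vecs[i], qvec)
                redundancy = max((cosine(vecs[i], vecs[j]) for j in selected),
                                 default=0.0)
                return lambda_ * relevance - (1 - lambda_) * redundancy
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return [sentences[i] for i in selected]

In an extractive lecture-summarization setting of the kind evaluated here, sentences would be the transcript sentences and query a representation of the lecture topic; the lambda_ parameter controls how strongly the greedy selection penalizes choosing sentences similar to ones already in the summary.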