We show some limitations of the ROUGE method for evaluating automatic summarization. We present a summarization method based on a Markov model of the source text: using a simple greedy word-selection strategy, it generates summaries that receive high ROUGE scores but that human readers would not consider good. The method can be adapted to trick different settings of the ROUGEeval package.
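The paper's exact procedure is not reproduced here, but the general idea can be sketched. The sketch below assumes a first-order (bigram) Markov model of the source text and a greedy strategy that starts from the most frequent source word and repeatedly appends the most frequent continuation; frequent source n-grams tend to reappear in reference summaries, which is what inflates n-gram-overlap scores such as ROUGE. All names and the toy source text are illustrative, not taken from the paper.

```python
from collections import Counter, defaultdict

def build_bigram_model(words):
    """Count, for each word, how often each following word occurs in the source."""
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def greedy_summary(words, length):
    """Start from the most frequent source word, then greedily follow the
    most frequent bigram continuation (illustrative sketch, not the paper's
    exact algorithm)."""
    model = build_bigram_model(words)
    current = Counter(words).most_common(1)[0][0]
    summary = [current]
    while len(summary) < length:
        if not model[current]:
            break  # dead end: no observed continuation
        current = model[current].most_common(1)[0][0]
        summary.append(current)
    return summary

source = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the mat").split()
print(greedy_summary(source, 5))  # → ['the', 'cat', 'sat', 'on', 'the']
```

A summary built this way strings together the source's most frequent word transitions, so its unigrams and bigrams overlap heavily with any reference summary drawn from the same text, even though the output is not coherent prose.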