This work surveys existing evaluation methodologies for the task of sentence compression, identifies their shortcomings, and proposes alternatives. In particular, we examine the problems of evaluating paraphrastic compression and comparing the output of different models. We demonstrate that compression rate is a strong predictor of compression quality and that perceived improvement over other models is often a side effect of producing longer output.
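The abstract's key claim is that compression rate predicts judged quality, so comparing systems at different rates is misleading. Compression rate is simply the fraction of the input retained in the output; a minimal sketch of the word-level version (whitespace tokenization is a simplifying assumption here, and the literature also reports character-level variants):

```python
def compression_rate(original: str, compressed: str) -> float:
    """Fraction of the original sentence's tokens kept in the output.

    Lower values mean more aggressive compression. Uses naive
    whitespace tokenization for illustration.
    """
    return len(compressed.split()) / len(original.split())


original = "The quick brown fox jumped over the lazy sleeping dog"
compressed = "The fox jumped over the dog"
print(round(compression_rate(original, compressed), 2))  # 0.6
```

Because longer outputs tend to receive higher quality judgments, a fair comparison should hold this ratio roughly constant across the systems being evaluated.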