We propose a method for revising lead sentences in news broadcasts. Unlike many previously proposed methods, it does not rely on coreference relations between noun phrases (NPs); instead, it inserts and substitutes phrases that modify the same head chunk in the lead sentence and in other sentences. The method borrows an idea from sentence fusion and is more general than NP-coreference-based approaches, which it subsumes. Experiments show that the method finds semantically appropriate revisions, demonstrating its basic feasibility. We also show that parsing errors were the main cause of degraded sentential completeness, such as grammaticality and redundancy.
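The core revision operation can be illustrated with a minimal sketch (not the authors' implementation): phrases from another sentence that modify the same head chunk as one in the lead sentence are inserted before that head. The chunk-and-dependency representation here is a hand-built stand-in for real parser output; chunk matching by exact text is a simplifying assumption.

```python
def revise_lead(lead, other):
    """Insert modifiers of a shared head chunk from `other` into `lead`.

    Each sentence is a list of (chunk_text, head_index) pairs, where
    head_index points at the chunk being modified (-1 for the root).
    Returns the revised lead as a list of chunk texts.
    """
    # Position of each chunk text in the lead sentence.
    pos = {text: i for i, (text, _) in enumerate(lead)}

    # Collect modifiers from `other` whose head chunk also occurs in the
    # lead and which are not already present in the lead themselves.
    inserts = {}  # lead index -> new modifier chunks to place before it
    for text, head in other:
        if head < 0:
            continue
        head_text = other[head][0]
        if head_text in pos and text not in pos:
            inserts.setdefault(pos[head_text], []).append(text)

    # Rebuild the lead, placing each new modifier before its head chunk.
    out = []
    for i, (text, _) in enumerate(lead):
        out.extend(inserts.get(i, []))
        out.append(text)
    return out


lead = [("The plant", 1), ("caught fire", -1)]
other = [("On Monday", 2), ("badly", 2), ("caught fire", -1)]
print(revise_lead(lead, other))
# -> ['The plant', 'On Monday', 'badly', 'caught fire']
```

A real system would match head chunks by normalized form rather than surface text and would rank candidate insertions, but the sketch shows why the approach subsumes NP coreference: a coreferent NP is just one kind of phrase attached to a shared head.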