Constructing literature abstracts by computer: techniques and prospects
Information Processing and Management: an International Journal - Special issue on natural language processing and information retrieval
Advances in Automatic Text Summarization
Improving summaries by revising them
ACL '99 Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics
FNDS: a dialogue-based system for accessing digested financial news
Journal of Systems and Software
User-model based personalized summarization
Information Processing and Management: an International Journal
Generating basic skills reports for low-skilled readers
Natural Language Engineering
The REG summarization system with question reformulation at QA@INEX track 2010
INEX'10 Proceedings of the 9th international conference on Initiative for the evaluation of XML retrieval: comparative evaluation of focused retrieval
In this paper, we first experimentally investigated the factors that make extracts hard to read, by having human subjects revise extracts to produce more readable versions. We then classified these factors into five categories, most of which relate to cohesion, devised revision rules for each category, and partially implemented a system that applies these rules to revise extracts.
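The rule-based revision the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the rule shown (dropping a discourse connective that dangles at the start of an extract because its antecedent sentence was not selected) is one plausible instance of a cohesion-repair rule, and all names and the connective list are assumptions.

```python
from typing import Callable, List

# Connectives that presuppose a preceding sentence; at the very start of
# an extract they dangle and signal a cohesion break (illustrative list).
CONNECTIVES = ("However, ", "Therefore, ", "Moreover, ", "Furthermore, ")

def drop_dangling_connective(sentence: str, preceding: List[str]) -> str:
    """If the extract has no prior sentence, remove a leading connective
    and recapitalize the remainder; otherwise leave the sentence alone."""
    if not preceding:
        for conn in CONNECTIVES:
            if sentence.startswith(conn):
                rest = sentence[len(conn):]
                return rest[0].upper() + rest[1:]
    return sentence

# A revision system would hold one such rule per identified factor.
RULES: List[Callable[[str, List[str]], str]] = [drop_dangling_connective]

def revise_extract(sentences: List[str]) -> List[str]:
    """Apply every revision rule to each extract sentence in order."""
    revised: List[str] = []
    for s in sentences:
        for rule in RULES:
            s = rule(s, revised)
        revised.append(s)
    return revised
```

For example, `revise_extract(["However, the method failed.", "It was slow."])` rewrites the first sentence to `"The method failed."` while leaving later sentences, whose connectives have an antecedent within the extract, untouched.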