Automatic text summarization is a specialized text-mining task: generating a summary or abstract from one or more input text documents. Researchers in this field have explored various heuristic and semi-supervised learning methods to generate both generic and user-oriented summaries. This paper examines the effectiveness of well-known summarization heuristics when applied to the task of generating single-document summary extracts of variable length. To evaluate summary quality, the original text documents and their summaries were scored by different human judges on soft metrics such as topic coverage, relative coherence, novelty, and information content, and the scores were compared statistically. It was verified experimentally that for 65% of the documents there was less than 10% variance between the scores assigned to the original texts and their summaries.
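To make the idea of a summarization heuristic concrete, the following is a minimal sketch of a frequency-based extractive summarizer in the spirit of Luhn's classic method: sentences are scored by the document-level frequency of their content words, and the top-scoring sentences are returned in document order. The function name, the sentence-splitting regex, and the tiny stopword list are illustrative assumptions, not the method evaluated in the paper.

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is",
             "for", "on", "that", "by", "with", "as", "was", "were"}

def summarize(text, num_sentences=2):
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-level frequency of content (non-stopword) tokens.
    tokens = [w for w in re.findall(r"[a-z']+", text.lower())
              if w not in STOPWORDS]
    freq = Counter(tokens)
    # Score each sentence by the summed frequency of its content words.
    scored = []
    for i, sent in enumerate(sentences):
        words = [w for w in re.findall(r"[a-z']+", sent.lower())
                 if w not in STOPWORDS]
        scored.append((sum(freq[w] for w in words), i))
    # Keep the top-k sentences, restored to their original order.
    top = sorted(sorted(scored, reverse=True)[:num_sentences],
                 key=lambda pair: pair[1])
    return " ".join(sentences[i] for _, i in top)
```

For example, on a short passage where two sentences share the repeated keyword, those two are selected as the variable-length extract while low-scoring sentences are dropped.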