The multi-document summarizer using genetic algorithm-based sentence extraction (SBGA) treats the summarization process as an optimization problem in which the optimal summary is selected from the set of candidate summaries formed by combinations of sentences from the original articles. To solve this NP-hard optimization problem, SBGA adopts a genetic algorithm, which can search for the optimal summary globally rather than scoring sentences in isolation. To improve the accuracy of term frequency, SBGA employs a novel method, TFS, which takes word sense into account when calculating term frequencies. Experiments on the DUC 2004 data show that this strategy is effective: its ROUGE-1 score is only 0.55% lower than that of the best participant in DUC 2004.
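The abstract does not give SBGA's encoding or fitness function, but the general scheme it describes (evolving sentence subsets toward an optimal summary) can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not SBGA itself: candidate summaries are binary masks over the sentence list, fitness rewards distinct-word coverage and penalizes exceeding a word budget, and standard one-point crossover and bit-flip mutation drive the search. SBGA's actual fitness would additionally use the TFS sense-aware term frequencies, which are omitted here.

```python
import random

def fitness(mask, sentences, budget):
    """Score a candidate summary: distinct words covered, minus a penalty
    for exceeding the word budget. (Illustrative, not SBGA's fitness.)"""
    chosen = [s for s, m in zip(sentences, mask) if m]
    covered = {w for s in chosen for w in s.split()}
    length = sum(len(s.split()) for s in chosen)
    return len(covered) - max(0, length - budget)

def crossover(a, b):
    """One-point crossover of two binary masks."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(mask, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - m if random.random() < rate else m for m in mask]

def ga_summarize(sentences, budget, pop_size=30, generations=50, seed=0):
    """Evolve sentence-selection masks and return the best summary found."""
    random.seed(seed)
    n = len(sentences)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, sentences, budget), reverse=True)
        survivors = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    best = max(pop, key=lambda m: fitness(m, sentences, budget))
    return [s for s, m in zip(sentences, best) if m]
```

Because the search operates on whole candidate summaries rather than individual sentence scores, it can trade off redundancy against coverage globally, which is the property the abstract attributes to the genetic-algorithm formulation.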