A knowledge induced graph-theoretical model for extract and abstract single document summarization
CICLing'13 Proceedings of the 14th international conference on Computational Linguistics and Intelligent Text Processing - Volume 2
The use of predefined phrase patterns, such as N-grams (N=2), longest common subsequences, or predefined linguistic patterns, gives no credit to non-matching or smaller useful patterns and may therefore result in loss of information. Conversely, a 1-gram-based model produces many noisy matches. Additionally, because a summary contains more than one topic, each with a different level of importance, we treat summarization evaluation as a topic-based evaluation of information content. In the first stage, we identify the topics covered in the given model (reference) summary and calculate their importance. In the second stage, we calculate the information coverage of the test (machine-generated) summary with respect to every identified topic. We introduce a graph-based mapping scheme and use the closeness centrality measure to calculate the information depth and sense of the co-occurring words in every identified topic. Our experimental results show that the devised system is better than, or comparable with, the best-reported results on the TAC 2011 AESOP dataset.
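The abstract does not give implementation details, so the following is only a minimal pure-Python sketch of the core ingredient it names: closeness centrality computed over a word co-occurrence graph. The graph construction (linking all words that share a sentence) and the function names are illustrative assumptions, not the authors' exact scheme.

```python
from collections import deque
from itertools import combinations

def build_cooccurrence_graph(sentences):
    """Illustrative construction: link every pair of words that
    co-occur in the same sentence (undirected, unweighted)."""
    graph = {}
    for sent in sentences:
        words = set(sent.lower().split())
        for u, v in combinations(words, 2):
            graph.setdefault(u, set()).add(v)
            graph.setdefault(v, set()).add(u)
    return graph

def closeness_centrality(graph, node):
    """Unnormalized closeness: (nodes reached - 1) divided by the
    sum of shortest-path distances from `node` (BFS, unit edges)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for w in graph.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# Toy example: "topic" appears in both sentences, so it sits
# centrally in the graph and gets the highest closeness score.
sentences = ["topic words share edges", "topic importance matters"]
g = build_cooccurrence_graph(sentences)
scores = {w: closeness_centrality(g, w) for w in g}
```

In this toy graph, `closeness_centrality(g, "topic")` is 1.0 (every other word is one hop away), while peripheral words such as `"words"` score lower, which is the intuition the paper exploits for ranking co-occurring words within a topic.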