The anatomy of a large-scale hypertextual Web search engine
WWW7: Proceedings of the seventh international conference on World Wide Web
Using random walks for question-focused sentence retrieval
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
Using Cross-Document Random Walks for Topic-Focused Multi-Document Summarization
WI '06 Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence
AdaSum: an adaptive model for summarization
Proceedings of the 17th ACM conference on Information and knowledge management
Using query expansion in graph-based approach for query-focused multi-document summarization
Information Processing and Management: an International Journal
Topic-driven multi-document summarization with encyclopedic knowledge and spreading activation
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing
LexRank: graph-based lexical centrality as salience in text summarization
Journal of Artificial Intelligence Research
Manifold-ranking based topic-focused multi-document summarization
IJCAI'07: Proceedings of the 20th international joint conference on Artificial Intelligence
An integrated multi-document summarization approach based on word hierarchical representation
ACLShort '09 Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Applying regression models to query-focused multi-document summarization
Information Processing and Management: an International Journal
Query-focused summarization is the task of producing a compressed text from an original set of documents based on a query. The document set can be viewed as a graph with sentences as nodes and edges added based on sentence similarity. Graph-based ranking algorithms that use a 'biased random surfer' model, such as topic-sensitive LexRank, have been successfully applied to query-focused summarization. In these algorithms, the random walk is biased towards sentences that contain query-relevant words: the random surfer is assumed to know the query-relevance score of the sentence to which he jumps, while the neighbourhood of that sentence is completely ignored. In this paper, we propose a look-ahead version of topic-sensitive LexRank. We assume that the random surfer not only knows the query relevance of the sentence to which he jumps but can also look N steps ahead from that sentence to find the query-relevance scores of future sentences. Using this look-ahead information, we identify sentences that are indirectly related to the query by counting the number of hops needed to reach a sentence containing query-relevant words. We then bias the random walk towards these indirectly query-relevant sentences as well as towards the sentences that contain query-relevant words directly. Experimental results show a 20.2% increase in ROUGE-2 score over topic-sensitive LexRank on the DUC 2007 data set. Further, our system outperforms the best systems of DUC 2006, and its results are comparable to those of state-of-the-art systems.
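The biased random walk described above can be sketched in a few lines of NumPy. The look-ahead propagation rule below (taking the maximum of a sentence's own relevance and the relevance reachable one hop away, repeated N times) is a hypothetical illustration of the idea, not the authors' exact formulation; the function and parameter names are likewise assumptions for this sketch.

```python
import numpy as np

def look_ahead_lexrank(sim, rel, n_steps=2, damping=0.85, tol=1e-8):
    """Sketch of topic-sensitive LexRank with an N-step look-ahead bias.

    sim: symmetric sentence-similarity matrix, shape (n, n)
    rel: query-relevance score per sentence, shape (n,)
    Returns a salience score per sentence (sums to 1).
    """
    # Row-normalise similarities into a transition matrix.
    P = sim / sim.sum(axis=1, keepdims=True)

    # Look-ahead: spread query relevance along edges so that a sentence
    # within n_steps hops of a query-relevant sentence also receives
    # teleportation mass (hypothetical propagation rule).
    bias = rel.astype(float).copy()
    for _ in range(n_steps):
        bias = np.maximum(bias, P @ bias)
    bias /= bias.sum()

    # Power iteration on the biased ("personalised") random-walk matrix.
    n = len(rel)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * (P.T @ r) + (1.0 - damping) * bias
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

With `n_steps=0` this reduces to plain topic-sensitive LexRank, where only sentences with directly query-relevant words attract the surfer's jumps.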