We present a method that introduces diversity into document retrieval by clustering the top-m terms obtained from the top-k retrieved documents via pseudo-relevance feedback. The terms in each cluster are used to automatically expand the original query, yielding one expanded query per cluster. We evaluate the effectiveness of our method with a non-traditional evaluation method that directly measures the degree of diversification: it computes the cosine similarity among the top-k documents retrieved by (i) the original query and (ii) the expanded queries. Our results indicate that we can increase diversity without compromising retrieval quality.
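The pipeline described above can be sketched in a few steps: retrieve top-k documents, extract the top-m feedback terms, group them into clusters, and expand the query once per cluster, with average pairwise cosine similarity among retrieved documents serving as the (inverse) diversity measure. The sketch below is a minimal illustration under simplifying assumptions, not the paper's actual implementation: it uses raw term-frequency cosine as the ranker and a naive clustering that assigns each feedback term to the top-ranked document in which it occurs most often. All function names and parameters are hypothetical.

```python
from collections import Counter
from math import sqrt

def tokenize(text):
    # Crude tokenizer: lowercase, keep purely alphabetic words.
    return [w for w in text.lower().split() if w.isalpha()]

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k):
    # Rank documents by cosine similarity to the query (stand-in for a real ranker).
    q = Counter(tokenize(query))
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(tokenize(d))), reverse=True)
    return ranked[:k]

def cluster_terms(top_docs, query, m):
    # Pseudo-relevance feedback: take the top-m terms from the top documents
    # (excluding original query terms), then assign each term to the document
    # in which it occurs most often; each document's term set is one cluster.
    q_terms = set(tokenize(query))
    counts = Counter(t for d in top_docs for t in tokenize(d) if t not in q_terms)
    clusters = {}
    for t, _ in counts.most_common(m):
        best = max(range(len(top_docs)), key=lambda i: tokenize(top_docs[i]).count(t))
        clusters.setdefault(best, []).append(t)
    return list(clusters.values())

def expand_queries(query, docs, k=3, m=6):
    # One expanded query per term cluster.
    top = retrieve(query, docs, k)
    return [query + " " + " ".join(c) for c in cluster_terms(top, query, m)]

def avg_pairwise_similarity(docs):
    # Mean cosine similarity among retrieved documents; lower means more diverse.
    vecs = [Counter(tokenize(d)) for d in docs]
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(cosine(vecs[i], vecs[j]) for i, j in pairs) / len(pairs) if pairs else 0.0
```

With an ambiguous query such as "python" over a corpus mixing snake-related and programming-related documents, `expand_queries` would produce one expanded query per topical cluster, and comparing `avg_pairwise_similarity` over the results of the original versus the expanded queries quantifies the diversification, in the spirit of the evaluation described above.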