Pseudo-relevance feedback extracts expansion terms from a set of top-ranked documents, so it is often crucial to identify the good feedback documents from which useful expansion terms can be drawn. In this paper, we propose to detect good feedback documents by classifying all feedback documents using a variety of features, such as the distribution of query terms in a feedback document, the similarity between a single feedback document and all top-ranked documents, and the proximity between the expansion terms and the original query terms within a feedback document. Query expansion is then performed using only the subset of top-ranked documents predicted to be good. Experimental results on standard TREC test collections show that query expansion over the selected feedback documents achieves statistically significant improvements over a strong pseudo-relevance feedback baseline that expands the query using all top-ranked documents.
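
To make the selection step concrete, the sketch below is a minimal illustration, not the authors' implementation: the exact feature definitions, the logistic-regression classifier, and the use of scikit-learn are assumptions made for exposition. It scores each top-ranked document with hand-crafted features (query-term distribution, similarity to the pooled top-ranked documents, and a query-term proximity proxy) and keeps only the documents a trained classifier predicts to be good before expansion terms are extracted.

from collections import Counter
from typing import List

import numpy as np
from sklearn.linear_model import LogisticRegression


def features(query: List[str], doc: List[str], top_docs: List[List[str]]) -> List[float]:
    """Three illustrative feature groups for one tokenized feedback document."""
    counts = Counter(doc)
    # Distribution of query terms in the feedback document.
    coverage = sum(t in counts for t in query) / len(query)
    query_tf = sum(counts[t] for t in query) / max(len(doc), 1)
    # Similarity between this single document and the pool of all
    # top-ranked documents (cosine over raw term-frequency vectors).
    pool = Counter(t for d in top_docs for t in d)
    vocab = sorted(set(counts) | set(pool))
    v_doc = np.array([counts[t] for t in vocab], dtype=float)
    v_pool = np.array([pool[t] for t in vocab], dtype=float)
    sim = float(v_doc @ v_pool) / (np.linalg.norm(v_doc) * np.linalg.norm(v_pool) + 1e-9)
    # Proximity proxy: inverse of the average gap between consecutive
    # query-term occurrences in the document (0 if fewer than two).
    pos = [i for i, t in enumerate(doc) if t in query]
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    proximity = 1.0 / (1.0 + float(np.mean(gaps))) if gaps else 0.0
    return [coverage, query_tf, sim, proximity]


def train_selector(training_examples):
    """training_examples: (query, doc, top_docs, label) tuples, where label is 1
    for a good feedback document (e.g. judged relevant on training queries)."""
    X = np.array([features(q, d, top) for q, d, top, _ in training_examples])
    y = np.array([label for _, _, _, label in training_examples])
    return LogisticRegression(max_iter=1000).fit(X, y)


def select_feedback_docs(clf, query, top_docs, threshold=0.5):
    """Keep only the top-ranked documents predicted to be good; expansion
    terms would then be extracted from this subset only."""
    X = np.array([features(query, d, top_docs) for d in top_docs])
    good = clf.predict_proba(X)[:, 1] >= threshold
    return [d for d, keep in zip(top_docs, good) if keep]

In practice the classifier would be trained on queries with relevance judgments and applied to unseen queries; since the abstract does not name the learning algorithm or the exact feature formulas, the choices above are placeholders for the general approach of classification-based feedback-document selection.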