Combining evidence from multiple retrieval models has been widely studied in the context of distributed search, metasearch, and rank fusion. Much of the prior work has focused on combining the retrieval scores (or the rankings) assigned by different retrieval models or ranking algorithms. In this work, we focus on the problem of choosing between retrieval models using performance estimation. We propose modeling the differences in retrieval performance directly, using rank-time features (features that are available to the ranking algorithms at query time) together with the retrieval scores those algorithms assign. Our experimental results show that, when choosing between two rankers, our approach yields significant improvements over the best individual ranker.
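To make the idea concrete, below is a minimal sketch of per-query ranker selection under stated assumptions: a binary classifier (logistic regression here, chosen for illustration and not necessarily the paper's model) is trained on features derived from each ranker's retrieval scores to predict which of two rankers will perform better on a given query. All function and variable names are hypothetical, and synthetic data stands in for real runs and relevance judgments.

```python
"""Sketch: choosing between two rankers A and B via performance estimation."""
import numpy as np
from sklearn.linear_model import LogisticRegression

K = 10  # number of top-ranked retrieval scores used as features

def query_features(scores_a, scores_b):
    # Build a per-query feature vector from each ranker's sorted top-K
    # retrieval scores plus simple summaries of their differences.
    a = np.sort(scores_a)[::-1][:K]
    b = np.sort(scores_b)[::-1][:K]
    return np.concatenate([a, b, [a.mean() - b.mean(), a[0] - b[0]]])

rng = np.random.default_rng(0)

# Synthetic training data: per query, each ranker's document scores plus a
# binary label indicating whether ranker A achieved the higher effectiveness
# (in practice, a metric such as average precision computed from relevance
# judgments; here the labels are derived synthetically for illustration).
n_queries, n_docs = 200, 50
scores_a = rng.random((n_queries, n_docs))
scores_b = rng.random((n_queries, n_docs))
labels = (scores_a.max(axis=1) > scores_b.max(axis=1)).astype(int)

X = np.stack([query_features(sa, sb) for sa, sb in zip(scores_a, scores_b)])
selector = LogisticRegression(max_iter=1000).fit(X, labels)

def choose_ranking(sa, sb, ranking_a, ranking_b):
    # At query time, return whichever result list the classifier
    # predicts will perform better for this query.
    x = query_features(sa, sb).reshape(1, -1)
    return ranking_a if selector.predict(x)[0] == 1 else ranking_b

# Usage: select between two result lists for a new (synthetic) query.
chosen = choose_ranking(rng.random(n_docs), rng.random(n_docs),
                        "ranking A", "ranking B")
```

Note that the selector operates only on rank-time information (the rankers' scores), so it can be applied at query time without relevance judgments; the judgments are needed only to label the training queries.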