Automatic combination of multiple ranked retrieval systems
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
SIGIR '01 Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
A formal approach to score normalization for meta-search
HLT '02 Proceedings of the second international conference on Human Language Technology Research
Robust test collections for retrieval evaluation
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
A support vector method for optimizing average precision
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Adaptive relevance feedback in information retrieval
Proceedings of the 18th ACM conference on Information and knowledge management
Ranking structured documents: a large margin based approach for patent prior art search
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Here or there: preference judgments for relevance
ECIR'08 Advances in Information Retrieval: Proceedings of the 30th European Conference on IR Research
Research methodology in studies of assessor effort for information retrieval evaluation
Large Scale Semantic Access to Content (Text, Image, Video, and Sound)
Learning to rank for why-question answering
Information Retrieval
Competition-based user expertise score estimation
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
SRbench--a benchmark for soundtrack recommendation systems
Proceedings of the 22nd ACM international conference on Information & Knowledge Management
We introduce a novel approach to combining rankings from multiple retrieval systems. Given only the rankings produced by the individual systems, we derive pairwise document preferences and fit either a logistic regression model or an SVM to learn a combined ranking. The approach requires neither training data nor relevance scores, yet outperforms a popular voting algorithm.
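A minimal sketch of the idea in the abstract, with the logistic-regression variant. All specifics here are assumptions, not the paper's actual method: features are per-system reciprocal ranks, the training preferences come from a majority vote among the input systems (so no external relevance judgments are needed), and the pairwise objective is optimized with plain gradient steps.

```python
# Hypothetical sketch of unsupervised rank fusion via pairwise preference learning.
# Assumed design (not taken from the paper): reciprocal-rank features per system,
# majority-vote pairwise preferences, logistic loss on feature differences.
import numpy as np

def reciprocal_rank_features(rankings, docs):
    """rankings: list of ranked doc-id lists (best first), one per system."""
    return np.array([
        [1.0 / (r.index(d) + 1) if d in r else 0.0 for r in rankings]
        for d in docs
    ])

def fuse(rankings, epochs=200, lr=0.5):
    docs = sorted({d for r in rankings for d in r})
    X = reciprocal_rank_features(rankings, docs)
    w = np.zeros(X.shape[1])

    # Pairwise preference (i, j): a majority of systems rank doc i above doc j.
    pairs = []
    for i in range(len(docs)):
        for j in range(len(docs)):
            if i == j:
                continue
            both = [r for r in rankings if docs[i] in r and docs[j] in r]
            votes = sum(r.index(docs[i]) < r.index(docs[j]) for r in both)
            if both and votes > len(both) / 2:
                pairs.append((i, j))

    # Logistic regression on feature differences: push w so that
    # score(i) > score(j) for every preferred pair (i, j).
    for _ in range(epochs):
        for i, j in pairs:
            diff = X[i] - X[j]
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(i preferred over j)
            w += lr * (1.0 - p) * diff             # gradient step on log-likelihood

    scores = X @ w
    return [docs[k] for k in np.argsort(-scores)]
```

For example, fusing `[["a","b","c"], ["a","c","b"], ["b","a","c"]]` should place `"a"` first, since a majority of the three input rankings prefer it over both other documents.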