SIGIR '02 Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval
A multi-system analysis of document and term selection for blind feedback
A new measure, AnchorMap, is introduced to evaluate how close two document retrieval rankings are to each other. AnchorMap scores computed over the initial ranked document lists of 8 different systems correlate strongly with the categorization of topics as easy or hard and, separately, with the topics on which blind feedback works. In a second experiment, AnchorMap is used to compare the initial ranked document list from a single system against that system's ranked list after blind feedback; again, high AnchorMap values correlate strongly with both topic difficulty and the successful application of blind feedback. Both experiments are examples of using properties of a topic that are independent of relevance information to predict the actual performance of IR systems on that topic. Initial experiments that attempted to improve retrieval performance using AnchorMap failed; the causes of that failure are discussed.
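The abstract does not give AnchorMap's actual definition. As a rough, hypothetical stand-in for the idea of scoring agreement among several systems' rankings for one topic, one could average a simple top-k set overlap over all pairs of ranked lists; a minimal sketch (the function names and the overlap measure are illustrative assumptions, not the paper's formula):

```python
from itertools import combinations

def topk_overlap(run_a, run_b, k=10):
    """Fraction of documents shared by the top-k of two ranked lists.

    run_a, run_b: lists of document IDs, best-ranked first.
    """
    return len(set(run_a[:k]) & set(run_b[:k])) / k

def mean_pairwise_overlap(runs, k=10):
    """Average top-k overlap over all pairs of systems' rankings for one topic.

    A high value means the systems largely agree on the top documents --
    the kind of per-topic, relevance-free signal the abstract describes.
    """
    pairs = list(combinations(runs, 2))
    return sum(topk_overlap(a, b, k) for a, b in pairs) / len(pairs)
```

For example, with three systems' top-3 lists `[["d1","d2","d3"], ["d1","d2","d4"], ["d1","d5","d6"]]`, `mean_pairwise_overlap(runs, k=3)` averages the three pairwise overlaps (2/3, 1/3, 1/3) to 4/9. Such a per-topic agreement score could then be correlated against topic difficulty, in the spirit of the experiments described above.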