Monte Carlo methods. Vol. 1: basics
A general language model for information retrieval (poster abstract)
Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval
Advanced Engineering Mathematics: Maple Computer Guide
SIGIR '02 Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval
Combining document representations for known-item search
Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Using temporal profiles of queries for precision prediction
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval
Predicting query difficulty on the web by learning visual clues
Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval
Document quality models for web ad hoc retrieval
Proceedings of the 14th ACM international conference on Information and knowledge management
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
On ranking the effectiveness of searches
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Ranking robustness: a novel framework to predict query performance
CIKM '06 Proceedings of the 15th ACM international conference on Information and knowledge management
Towards a graph-based user profile modeling for a session-based personalized search
Knowledge and Information Systems
We introduce the notion of ranking robustness, which refers to a property of a ranked list of documents that indicates how stable the ranking is in the presence of uncertainty in the ranked documents. We propose a statistical measure called the robustness score to quantify this notion. Our initial motivation for measuring ranking robustness is to predict topic difficulty for content-based queries in the ad-hoc retrieval task. Our results demonstrate that the robustness score is positively and consistently correlated with the average precision of content-based queries across a variety of TREC test collections. Though our focus is on prediction under the ad-hoc retrieval task, we observe an interesting negative correlation with query performance when our technique is applied to named-page finding queries, which are a fundamentally different kind of query. A side effect of this differing behavior of the robustness score between the two query types is that the robustness score is also found to be a good feature for query classification.
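The abstract does not spell out how a robustness score is computed, but the underlying idea — a ranking is robust if small perturbations of the document scores rarely change the ranked order — can be sketched as follows. This is an illustrative approximation, not the paper's actual procedure: the perturbation model (Gaussian noise on retrieval scores), the noise level, and the use of Spearman rank correlation are all assumptions made for the sake of the example.

```python
import random


def spearman(rank_a, rank_b):
    """Spearman rank correlation between two orderings of the same items."""
    n = len(rank_a)
    pos_b = {doc: i for i, doc in enumerate(rank_b)}
    d2 = sum((i - pos_b[doc]) ** 2 for i, doc in enumerate(rank_a))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


def robustness_score(scores, noise=0.05, trials=100, seed=0):
    """Average rank correlation between the original ranking and rankings
    obtained after randomly perturbing each document's retrieval score.

    `scores` maps document id -> retrieval score. The Gaussian noise model
    and its standard deviation are illustrative assumptions.
    """
    rng = random.Random(seed)
    original = sorted(scores, key=scores.get, reverse=True)
    total = 0.0
    for _ in range(trials):
        perturbed = {d: s + rng.gauss(0, noise) for d, s in scores.items()}
        reranked = sorted(perturbed, key=perturbed.get, reverse=True)
        total += spearman(original, reranked)
    return total / trials
```

A ranking with well-separated scores survives perturbation largely intact and scores near 1, while a ranking full of near-ties is easily reshuffled and scores lower — the kind of stability signal the abstract associates with query performance.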