Various click models have recently been proposed as a principled approach to inferring the relevance of documents from clickthrough data. The inferred document relevance is potentially useful for evaluating Web retrieval systems. In practice, it is generally desirable to obtain accurate evaluation results with a minimal number of user query submissions. This is important both for speeding up the search engine development and evaluation cycle and for obtaining reliable evaluation results on tail queries. In this paper, we propose a reordering framework for efficient evaluation in the context of clickthrough-based Web retrieval evaluation. The main idea is to move up the documents that contribute more to the evaluation task. Within this framework, we propose four intuitions and formulate them as an optimization problem. Both a user study and experiments on TREC data validate that the reordering framework requires far fewer query submissions to obtain accurate evaluation results, at only a small cost to users' utility.
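The core idea of the reordering framework can be illustrated with a minimal sketch. This is not the paper's actual optimization; it simply assumes each document has a hypothetical "contribution" score (e.g., uncertainty in its inferred relevance) and trades that contribution off against a per-position penalty for disturbing the original ranking, which stands in for the loss of user utility. The function name, the scores, and the `utility_cost` parameter are all illustrative assumptions.

```python
# Illustrative sketch only -- not the algorithm from the paper.
# Documents whose relevance estimates would contribute most to the
# evaluation are promoted, minus a penalty for moving away from the
# original rank (a stand-in for the harm to users' utility).

def reorder(ranking, contribution, utility_cost=0.1):
    """Reorder a ranked list for evaluation efficiency.

    ranking      -- list of doc ids in the original retrieval order
    contribution -- dict mapping doc id -> assumed evaluation contribution
    utility_cost -- assumed per-position penalty for disturbing the ranking
    """
    def score(item):
        pos, doc = item
        # Hypothetical trade-off: contribution minus position-based penalty.
        return contribution[doc] - utility_cost * pos

    indexed = list(enumerate(ranking))
    indexed.sort(key=score, reverse=True)
    return [doc for _, doc in indexed]

original = ["d1", "d2", "d3", "d4"]
contrib = {"d1": 0.05, "d2": 0.9, "d3": 0.4, "d4": 0.1}
print(reorder(original, contrib))  # → ['d2', 'd3', 'd1', 'd4']
```

In this toy example, `d2` and `d3` are promoted above `d1` because their larger assumed contributions outweigh the reordering penalty, mirroring the intuition that informative documents should be seen (and clicked on) sooner.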