A utility-theoretic analysis of expected search length. SIGIR '88: Proceedings of the 11th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS).
Retrieval evaluation with incomplete information. Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Eye-tracking analysis of user behavior in WWW search. Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
TREC: Experiment and Evaluation in Information Retrieval. Digital Libraries and Electronic Publishing.
An experimental comparison of click position-bias models. WSDM '08: Proceedings of the 2008 International Conference on Web Search and Data Mining.
A probability ranking principle for interactive information retrieval. Information Retrieval.
A new rank correlation coefficient for information retrieval. Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
A new interpretation of average precision. Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Rank-biased precision for measurement of retrieval effectiveness. ACM Transactions on Information Systems (TOIS).
Efficient multiple-click models in web search. Proceedings of the Second ACM International Conference on Web Search and Data Mining.
Methods for Evaluating Interactive Information Retrieval Systems with Users. Foundations and Trends in Information Retrieval.
A user behavior model for average precision and its generalization to graded judgments. Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval.
Time-based calibration of effectiveness measures. SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Stochastic simulation of time-biased gain. Proceedings of the 21st ACM International Conference on Information and Knowledge Management.
Model Based Comparison of Discounted Cumulative Gain and Average Precision. Journal of Discrete Algorithms.
Users versus models: what observation tells us about effectiveness metrics. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management.
We propose to explain Discounted Cumulative Gain (DCG) as the consequence of a set of hypotheses, embodied in a generative probabilistic model, about how users browse the ranked result list of a search engine. This exercise of reconstructing a user model from a metric shows that the numerical values of the discounting factors can be estimated from data. It also allows us to compare candidate user models in terms of how well they describe the observed data, and hence to select the best one. In general, it is not possible to relate the performance of a ranking function measured by DCG to the clicks observed after the function is deployed in a production environment; we show in this paper that a user model makes this possible. Finally, we show that DCG can be interpreted as a measure of the utility a user gains per unit of effort she is ready to allocate. This contrasts nicely with a recent interpretation of average precision (AP), another popular Information Retrieval metric, as a measure of the effort needed to achieve a unit of utility [7].
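To make the user-model reading concrete, here is a minimal sketch (not the paper's actual model). It computes DCG with the standard 1/log2(i+1) discount, where discount(i) can be read as the probability that a user examines rank i, and estimates per-rank examination probabilities from click logs with a deliberately crude proxy: a click at rank i is taken to imply that every rank above it was examined. The judgments and click sessions below are hypothetical.

```python
import math

def dcg(gains, discount=None):
    """Discounted Cumulative Gain for a ranked list of graded gains.

    discount(i) defaults to the standard 1 / log2(i + 1); under the
    user-model reading, it is the probability that the user examines
    the result at rank i (1-based).
    """
    if discount is None:
        discount = lambda i: 1.0 / math.log2(i + 1)
    return sum(g * discount(i) for i, g in enumerate(gains, start=1))

def empirical_discounts(click_sessions, depth):
    """Crude per-rank examination estimate: the fraction of sessions
    containing a click at rank i or deeper (assuming a click implies
    every rank above it was examined)."""
    n = len(click_sessions)
    return [sum(1 for s in click_sessions if s and max(s) >= rank) / n
            for rank in range(1, depth + 1)]

# Hypothetical graded relevance judgments for the top 5 results.
gains = [3, 2, 3, 0, 1]
print(round(dcg(gains), 3))  # prints 6.149

# Hypothetical sessions: each is the set of clicked ranks.
sessions = [{1}, {1, 3}, {2}, set(), {1, 2, 4}]
print(empirical_discounts(sessions, 3))  # prints [0.8, 0.6, 0.4]
```

Once `empirical_discounts` is fitted, it can be passed to `dcg` in place of the log2 discount, which is one way to read the paper's claim that the discounting factors can be estimated from observed behavior rather than fixed by convention.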