Average precision and R-precision are two of the most commonly cited measures of overall retrieval performance, but their correlation, though well-known, has defied explanation. We recently devised a geometric interpretation of R-precision which suggests that under a reasonable set of assumptions, R-precision approximates the area under the precision-recall curve, as does average precision, thus explaining their correlation. In this paper, we consider these assumptions and our geometric interpretation of R-precision in order to further understand, and make reasonable use of, the information that R-precision provides. Given our geometric interpretation of R-precision, we show that R-precision is highly informative by demonstrating that it can be used to (1) accurately infer precision-recall curves, (2) accurately infer other measures of retrieval performance, and (3) devise new measures of retrieval performance. Through our analysis, we also state the conditions under which R-precision is informative.
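To make the two measures concrete, the following is a minimal sketch (with hypothetical helper names) of how R-precision and average precision are computed from a binary relevance ranking; the example ranking is illustrative, not from the paper.

```python
def r_precision(ranking):
    """Precision at rank R, where R is the total number of relevant documents."""
    R = sum(ranking)
    return sum(ranking[:R]) / R if R else 0.0

def average_precision(ranking):
    """Mean of the precision values at the rank of each relevant document."""
    R = sum(ranking)
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / R if R else 0.0

# Example ranking by retrieval score: 1 = relevant, 0 = non-relevant.
ranking = [1, 0, 1, 1, 0, 0, 1, 0]
print(r_precision(ranking))        # precision at rank R = 4 -> 0.75
print(average_precision(ranking))  # mean of 1/1, 2/3, 3/4, 4/7
```

Both quantities summarize the same precision-recall behavior, which is consistent with the geometric argument above that each approximates the area under the precision-recall curve.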