A geometric interpretation and analysis of R-precision
Proceedings of the 14th ACM international conference on Information and knowledge management
We consider two of the most commonly cited measures of retrieval performance: average precision and R-precision. It is well known that average precision and R-precision are highly correlated and similarly robust measures of performance, though the reasons for this are not entirely clear. In this paper, we give a geometric argument which shows that under a very reasonable set of assumptions, average precision and R-precision both approximate the area under the precision-recall curve, thus explaining their high correlation. We further demonstrate through the use of TREC data that the similarity or difference between average precision and R-precision is largely governed by the adherence to, or violation of, these reasonable assumptions.
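To make the three quantities in the abstract concrete, here is a minimal sketch (not from the paper) computing average precision, R-precision, and a trapezoidal estimate of the area under the precision-recall curve for a single hypothetical ranked list. The binary `ranking` list, the value of `R`, and the conventional starting point `(recall=0, precision=1)` for the curve are illustrative assumptions.

```python
def precision_at(relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(relevant[:k]) / k

def average_precision(relevant, R):
    """Mean of precision at each rank where a relevant document appears,
    normalized by R, the total number of relevant documents."""
    return sum(precision_at(relevant, k + 1)
               for k, rel in enumerate(relevant) if rel) / R

def r_precision(relevant, R):
    """Precision after exactly R documents have been retrieved."""
    return precision_at(relevant, R)

def pr_curve_area(relevant, R):
    """Area under the (recall, precision) points via the trapezoidal rule.
    Starting the curve at (0, 1) is a common convention, assumed here."""
    points = [(0.0, 1.0)]
    hits = 0
    for k, rel in enumerate(relevant, start=1):
        hits += rel
        if rel:
            points.append((hits / R, hits / k))
    return sum((r2 - r1) * (p1 + p2) / 2
               for (r1, p1), (r2, p2) in zip(points, points[1:]))

# Hypothetical ranked list: 1 = relevant, 0 = non-relevant; R = 4.
ranking = [1, 0, 1, 1, 0, 0, 1, 0]
R = 4
print(average_precision(ranking, R))
print(r_precision(ranking, R))
print(pr_curve_area(ranking, R))
```

On this toy list the three numbers come out close to one another, which is the kind of agreement the paper's geometric argument explains; the argument also predicts where they diverge when its assumptions about the shape of the precision-recall curve are violated.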