The performance of a Content-Based Image Retrieval (CBIR) system, when presented as Precision-Recall or Precision-Scope graphs, offers only an incomplete view of the system under study: the influence of irrelevant items is obscured. In this paper, we propose a comprehensive, well-normalized description of ranking performance relative to that of an Ideal Retrieval System, defined by ground truth for a large number of predefined queries. We advocate normalization with respect to the size of the relevant class and restriction to specific normalized scope values. We also propose new performance graphs for total-recall studies over a range of embeddings.
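The normalization advocated above can be illustrated with a small sketch. The function name and interface below are hypothetical (the abstract does not specify an API): it computes precision and recall at a scope expressed as a multiple of the relevant class size R, so that results for queries with differently sized relevant classes become comparable, and the Ideal Retrieval System (all R relevant items ranked first) scores 1.0 on both measures at normalized scope 1.0.

```python
# Hypothetical sketch of precision/recall at a *normalized* scope,
# where the scope is expressed as a multiple of the relevant class size R.

def precision_recall_at_normalized_scope(ranked_relevance, num_relevant, norm_scope):
    """ranked_relevance: booleans, True where the ranked item is relevant.
    num_relevant: size R of the ground-truth relevant class for the query.
    norm_scope: scope as a fraction/multiple of R (1.0 = inspect top R items).
    """
    # Convert the normalized scope back to an absolute number of items.
    scope = max(1, round(norm_scope * num_relevant))
    hits = sum(ranked_relevance[:scope])   # relevant items inside the scope
    precision = hits / scope
    recall = hits / num_relevant
    return precision, recall

# An Ideal Retrieval System ranks all R relevant items first:
ideal_ranking = [True] * 4 + [False] * 6
print(precision_recall_at_normalized_scope(ideal_ranking, 4, 1.0))  # (1.0, 1.0)
```

Because the scope axis is scaled by R per query, curves from queries with small and large relevant classes can be averaged without the class-size bias that raw Precision-Scope graphs exhibit.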