Evaluation strategies for image understanding and retrieval
Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval
We present a comprehensive strategy for evaluating image retrieval algorithms. Because automated image retrieval is only meaningful in its service to people, performance characterization must be grounded in human evaluation. We have therefore collected a large data set of human evaluations of retrieval results, for both query by image example and query by text. The data is independent of any particular image retrieval algorithm and can be used to evaluate and compare many such algorithms without further data collection. The data and calibration software are available online (http://kobus.ca/research/data). We develop and validate methods for generating sensible evaluation data, calibrating for disparate evaluators, mapping image retrieval system scores to the human evaluation results, and comparing retrieval systems. We demonstrate the process by providing grounded comparison results for several algorithms.
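To make the calibration and comparison steps concrete, here is a minimal sketch of one plausible approach: normalizing each evaluator's raw ratings to a common scale (per-evaluator z-scores) and then comparing retrieval systems on the calibrated human scores. The function names, data layout, and the z-score method itself are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch: calibrate disparate evaluators via per-evaluator
# z-score normalization, then score systems on calibrated human ratings.
# This is an assumed method for illustration, not the paper's algorithm.
from statistics import mean, pstdev

def calibrate(ratings):
    """Map each evaluator's raw ratings onto a common scale.

    ratings: {evaluator: [(item, raw_score), ...]}
    returns: {item: [calibrated scores, one per evaluator who rated it]}
    """
    calibrated = {}
    for evaluator, scored in ratings.items():
        scores = [s for _, s in scored]
        mu = mean(scores)
        sigma = pstdev(scores) or 1.0  # guard against a constant rater
        for item, s in scored:
            calibrated.setdefault(item, []).append((s - mu) / sigma)
    return calibrated

def system_score(calibrated, retrieved):
    """Average calibrated human score over the items a system retrieved."""
    return mean(mean(calibrated[i]) for i in retrieved if i in calibrated)
```

A retrieval system that returns items humans rated highly (after calibration) then receives a higher `system_score`, giving a grounded basis for comparing systems against the same human data.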