Features for image retrieval: an experimental comparison
Information Retrieval
A major problem in the field of content-based image retrieval is the lack of a common performance measure that allows researchers to compare different image retrieval systems in a quantitative and objective manner. We analyze several proposed performance evaluation measures, select an appropriate one, and give quantitative results for four different, freely available image retrieval tasks using various combinations of features. This work provides a concrete starting point for the comparison of content-based image retrieval systems: an appropriate performance measure and a set of databases are proposed, and results for different retrieval methods are reported.
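As an illustration of the kind of quantitative retrieval measure the abstract refers to, the sketch below computes precision at rank k, one of the standard CBIR evaluation measures. This is a generic example, not necessarily the specific measure selected by the authors; the function and argument names are hypothetical.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved images that are relevant.

    retrieved: ranked list of image identifiers returned by the system
    relevant:  set of identifiers judged relevant to the query
    k:         rank cutoff
    """
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for img in top_k if img in relevant)
    return hits / len(top_k)


# Example: two of the top four results are relevant -> precision 0.5.
score = precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=4)
```

Averaging such a score over a fixed query set and database is what makes results from different retrieval systems directly comparable.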