In all subfields of information retrieval, test datasets and ground truth data are essential tools for testing and comparing new search methods. This is also reflected in the image retrieval community, where several benchmarking activities have emerged in recent years. However, the number of available test collections remains rather small, and the existing ones are often limited in size or accessible only to participants of benchmarking competitions. In this work, we present a new, freely available large-scale dataset for the evaluation of content-based image retrieval systems. The dataset consists of 20 million high-quality images with five visual descriptors and rich, systematic textual annotations, a set of 100 test query objects, and semi-automatically collected ground truth verified by users. Furthermore, we provide services that enable exploitation and collaborative expansion of the ground truth.
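The text does not spell out an evaluation protocol, so the following is only a minimal sketch of how such a query set and ground truth might be used to score a retrieval system. The query identifiers, the ground-truth mapping, and the `search` function are hypothetical placeholders; the metrics shown (precision@k and mean average precision) are standard choices for this kind of benchmark, not ones the dataset necessarily prescribes.

```python
# Sketch only: scoring a CBIR system against per-query relevance sets.
# ground_truth and search() are hypothetical, not part of the dataset API.

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved items that are relevant."""
    return sum(1 for i in ranked_ids[:k] if i in relevant_ids) / k

def average_precision(ranked_ids, relevant_ids):
    """Average of precision values at each rank where a relevant item appears."""
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked_ids, start=1):
        if item in relevant_ids:
            hits += 1
            score += hits / rank
    return score / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(queries, ground_truth, search):
    """MAP over the query set, e.g. the 100 test query objects.

    ground_truth maps each query ID to the set of image IDs judged
    relevant; search(q) returns a ranked list of retrieved image IDs.
    """
    return sum(
        average_precision(search(q), ground_truth[q]) for q in queries
    ) / len(queries)
```

With a fixed query set and user-verified relevance judgments like those described above, such scores become directly comparable across systems, which is the main point of a shared benchmark collection.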