Shared evaluation tasks have become popular in recent decades as a way for research communities to advance together. This paper presents the organization of five new shared-task evaluation campaigns for image indexing and retrieval. We designed these campaigns on the basis of our previous experience participating in and organizing text retrieval campaigns such as TREC, AMARYLLIS and CLEF. Our aim with these campaigns is to narrow the gap between technology-oriented evaluation and user-oriented evaluation in information retrieval.
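To illustrate what "technology-oriented evaluation" typically means in TREC- and CLEF-style campaigns, the sketch below computes mean average precision (MAP), a standard system-oriented effectiveness measure. It is not taken from the paper; the topic and document identifiers are invented for illustration only.

```python
# Minimal sketch (illustrative, not from the paper): mean average precision,
# a common system-oriented measure in TREC/CLEF-style retrieval campaigns.

def average_precision(ranked_docs, relevant):
    """AP for one topic: mean of the precision values at each relevant hit."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """MAP over all topics in a run.

    `run`   maps topic id -> ranked list of retrieved document ids.
    `qrels` maps topic id -> set of relevant document ids.
    """
    return sum(average_precision(docs, qrels.get(topic, set()))
               for topic, docs in run.items()) / len(run)

# Hypothetical two-topic example.
run = {"t1": ["d3", "d1", "d7"], "t2": ["d2", "d9"]}
qrels = {"t1": {"d1", "d7"}, "t2": {"d9", "d4"}}
print(mean_average_precision(run, qrels))  # ~0.42 for this toy data
```

User-oriented evaluation, by contrast, would complement such batch measures with observations of real searchers completing tasks, which is the gap the campaigns described here aim to narrow.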