The Photo Annotation Task posed the challenge of automatically annotating Flickr photos with 53 visual concepts and was organized as part of the ImageCLEF@ICPR contest. In total, 12 research teams participated in the multilabel classification challenge, while initially 17 research groups expressed interest and were given access to the data. The participants were provided with a training set of 5,000 annotated Flickr images and a validation set of 3,000 annotated Flickr images; the test was performed on 10,000 Flickr images. The evaluation was carried out in two ways: per concept, using the Equal Error Rate (EER) and the Area Under Curve (AUC), and per example, using the Ontology Score (OS). Summarizing the results, an average AUC of 86.5% was achieved, with individual concepts reaching an AUC of 96%. The classification performance per image ranged between 59% and 100%, with an average score of 85%. Compared to the results achieved in ImageCLEF 2009, detection performance improved in the concept-based evaluation by 2.2% EER and 2.5% AUC, and showed a slight decrease in the example-based evaluation.
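
To make the per-concept measures concrete, the following is a minimal sketch of how EER and AUC could be computed for a single concept from classifier confidence scores. It assumes NumPy and scikit-learn are available; the function name eer_and_auc and the example scores are hypothetical illustrations, not part of the contest's actual evaluation code.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def eer_and_auc(y_true, y_score):
        # Compute the ROC curve for one concept from binary ground
        # truth (y_true) and classifier confidence scores (y_score).
        fpr, tpr, _ = roc_curve(y_true, y_score)
        fnr = 1.0 - tpr
        # The EER is the operating point where the false positive rate
        # equals the false negative rate; take the closest point on the
        # computed curve as an approximation.
        idx = np.nanargmin(np.abs(fpr - fnr))
        eer = (fpr[idx] + fnr[idx]) / 2.0
        return eer, auc(fpr, tpr)

    # Hypothetical scores for one concept over five test images.
    y_true = np.array([1, 0, 1, 1, 0])
    y_score = np.array([0.9, 0.3, 0.8, 0.4, 0.2])
    print(eer_and_auc(y_true, y_score))

In the concept-based evaluation, such per-concept EER and AUC values would then be averaged over all 53 concepts to obtain the summary figures reported above.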