Overview of the Photo Annotation Task in ImageCLEF@ICPR

  • Authors: Stefanie Nowak
  • Affiliation: Audio-Visual Systems, Fraunhofer IDMT, Ilmenau, Germany
  • Venue: ICPR'10 Proceedings of the 20th International Conference on Recognizing Patterns in Signals, Speech, Images, and Videos
  • Year: 2010

Abstract

The Photo Annotation Task poses the challenge of automatically annotating 53 visual concepts in Flickr photos and was organized as part of the ImageCLEF@ICPR contest. In total, 12 research teams participated in the multilabel classification challenge, while 17 research groups had initially registered and received access to the data. Participants were provided with a training set of 5,000 annotated Flickr images and a validation set of 3,000 annotated Flickr images; testing was performed on 10,000 Flickr images. The evaluation was twofold: per concept, using the Equal Error Rate (EER) and the Area Under the Curve (AUC), and per example, using the Ontology Score (OS). Summarizing the results, an average AUC of 86.5% was achieved, with individual concepts reaching an AUC of 96%. The classification performance per image ranged between 59% and 100%, with an average score of 85%. Compared to the results of ImageCLEF 2009, detection performance improved in the concept-based evaluation by 2.2% EER and 2.5% AUC and showed a slight decrease in the example-based evaluation.
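
As a rough illustration of the concept-based measures, the sketch below computes AUC and EER for a single concept from binary ground truth and continuous confidence scores. It is not the official evaluation tool of the benchmark; the scikit-learn calls and the synthetic data are assumptions made for demonstration only.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def equal_error_rate(y_true, y_score):
    """EER: the operating point on the ROC curve where the
    false positive rate equals the false negative rate (1 - TPR)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[idx] + fnr[idx]) / 2.0

# Synthetic stand-in for one concept: binary ground truth and
# classifier confidence scores (assumed data, not the benchmark's).
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, size=1000), 0.0, 1.0)

print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")    # higher is better
print(f"EER: {equal_error_rate(y_true, y_score):.3f}")  # lower is better
```

In this per-concept view, a higher AUC and a lower EER both indicate better detection; the task averaged these scores over all 53 concepts, while the example-based Ontology Score judged the full set of labels assigned to each image.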