ImageCLEF@ICPR Contest: Challenges, Methodologies and Results of the Photo Annotation Task

  • Authors:
  • Stefanie Nowak

  • Affiliations:
  • -

  • Venue:
  • ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition
  • Year:
  • 2010


Abstract

The Photo Annotation Task is one task of the ImageCLEF@ICPR contest and poses the challenge of annotating 53 visual concepts in Flickr photos. Altogether, 12 research teams took on this multi-label classification challenge and submitted solutions. The participants were provided with a training set and a validation set consisting of 5,000 and 3,000 annotated images, respectively. The test was performed on 10,000 images. Two evaluation paradigms were applied: evaluation per concept and evaluation per example. The evaluation per concept was performed by calculating the Equal Error Rate and the Area Under Curve (AUC). The evaluation per example utilizes a recently proposed Ontology Score. Across the concepts, an average AUC of 86.5% was achieved, with individual concepts reaching an AUC of 96%. The classification performance per image ranged between 59% and 100%, with an average score of 85%.
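The two per-concept measures named in the abstract can be computed directly from a concept's predicted scores and binary ground-truth labels. Below is a minimal sketch of both metrics; it is not the contest's official scorer (the per-example Ontology Score is omitted), and the function names are our own.

```python
def auc(scores, labels):
    """Area Under the ROC Curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def equal_error_rate(scores, labels):
    """Approximate Equal Error Rate: sweep score thresholds from high
    to low and report the error rate at the operating point where the
    false-positive rate is closest to the false-negative rate."""
    pairs = sorted(zip(scores, labels), reverse=True)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    best_gap, eer = float("inf"), 1.0
    for _, y in pairs:
        tp += y
        fp += 1 - y
        fnr = 1.0 - tp / n_pos   # positives still missed
        fpr = fp / n_neg         # negatives wrongly accepted
        if abs(fpr - fnr) < best_gap:
            best_gap, eer = abs(fpr - fnr), (fpr + fnr) / 2.0
    return eer

# Toy example: a perfectly ranked concept gives AUC 1.0 and EER 0.0.
scores = [0.9, 0.8, 0.4, 0.3]
labels = [1, 1, 0, 0]
print(auc(scores, labels))              # → 1.0
print(equal_error_rate(scores, labels)) # → 0.0
```

In the contest setting, these functions would be applied once per concept over the 10,000 test images, and the reported 86.5% figure corresponds to the AUC averaged over all 53 concepts.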