Empirical investigations on benchmark tasks for automatic image annotation

  • Authors:
  • Ville Viitaniemi; Jorma Laaksonen

  • Affiliations:
  • Adaptive Informatics Research Centre, Helsinki University of Technology, Finland (both authors)

  • Venue:
  • VISUAL '07: Proceedings of the 9th International Conference on Advances in Visual Information Systems
  • Year:
  • 2007

Abstract

Automatic image annotation aims at labeling images with keywords. In this paper we investigate three annotation benchmark tasks used in the literature to evaluate the performance of annotation systems. We empirically compare the first two tasks, the 5000 Corel images task and the Corel categories task, by applying a family of annotation system configurations derived from our PicSOM image content analysis framework. We establish an empirical correspondence between performance levels in the two tasks by studying the performance of our system configurations alongside figures reported in the literature. We also consider the ImageCLEF 2006 Object Annotation Task, which has previously been found difficult. By experimenting with the data, we gain insight into the reasons that make the ImageCLEF task difficult. In the course of our experiments, we demonstrate that in these three tasks the PicSOM system, which is based on the fusion of numerous global image features, outperforms the other annotation methods considered.
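
The abstract does not spell out the evaluation protocol, but annotation benchmarks such as the 5000 Corel images task are commonly scored with per-keyword precision and recall averaged over the vocabulary, together with the number of keywords recalled at least once. The sketch below illustrates that generic protocol only; the data layout (dicts of keyword sets per image) and function names are assumptions for illustration and are not taken from the paper or the PicSOM system.

```python
from collections import defaultdict


def per_keyword_scores(predicted, ground_truth):
    """Per-keyword precision and recall for an annotation benchmark.

    `predicted` and `ground_truth` map image ids to sets of keywords.
    Both the function name and the dictionary layout are illustrative.
    """
    tp = defaultdict(int)  # keyword predicted and present in ground truth
    fp = defaultdict(int)  # keyword predicted but absent from ground truth
    fn = defaultdict(int)  # keyword present in ground truth but not predicted
    for image, truth in ground_truth.items():
        pred = predicted.get(image, set())
        for word in pred & truth:
            tp[word] += 1
        for word in pred - truth:
            fp[word] += 1
        for word in truth - pred:
            fn[word] += 1

    scores = {}
    for word in set(tp) | set(fp) | set(fn):
        precision = tp[word] / (tp[word] + fp[word]) if (tp[word] + fp[word]) else 0.0
        recall = tp[word] / (tp[word] + fn[word]) if (tp[word] + fn[word]) else 0.0
        scores[word] = (precision, recall)
    return scores


def summarize(scores):
    """Mean per-keyword precision/recall and count of keywords with recall > 0."""
    n = len(scores)
    mean_precision = sum(p for p, _ in scores.values()) / n
    mean_recall = sum(r for _, r in scores.values()) / n
    recalled_keywords = sum(1 for _, r in scores.values() if r > 0)
    return mean_precision, mean_recall, recalled_keywords
```

Averaging per keyword rather than per image keeps rare keywords from being swamped by frequent ones, which is why this style of summary is the usual choice for Corel-style vocabularies.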