Geo-based automatic image annotation

  • Authors:
  • Hatem Mousselly Sergieh;Gabriele Gianini;Mario Döller;Harald Kosch;Elöd Egyed-Zsigmond;Jean-Marie Pinon

  • Affiliations:
  • INSA de Lyon, Villeurbanne, France;University of Milan, Italy;University of Passau, Passau, Germany;University of Passau, Passau, Germany;INSA de Lyon, Villeurbanne, France;INSA de Lyon, Villeurbanne, France

  • Venue:
  • Proceedings of the 2nd ACM International Conference on Multimedia Retrieval
  • Year:
  • 2012


Abstract

A huge number of user-tagged images are uploaded to the web every day, and a growing share of them are also geotagged. This opens new opportunities for automatically tagging images, enabling efficient image management and retrieval. In this paper, an automatic image annotation approach is proposed. It is based on a statistical model that combines two kinds of information: high-level information, represented by the user tags of images captured at the same location as a new unlabeled image (the input image); and low-level information, represented by the visual similarity between the input image and the collection of geographically similar images. To maximize the number of images that are visually similar to the input image, an iterative visual matching approach is proposed and evaluated. The results show that a significant improvement in recall can be achieved with an increasing number of iterations. The quality of the recommended tags has also been evaluated, and overall good performance has been observed.
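The abstract's two ingredients — tags from geographically nearby images, weighted by visual similarity, with the similar set grown by iterative matching — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' actual statistical model: the `similarity` function, the threshold, the fixed weight boost for visually matched images, and the toy data are all hypothetical.

```python
from collections import defaultdict

def iterative_visual_matching(input_img, geo_images, similarity,
                              threshold=0.5, iterations=2):
    """Grow the set of images considered visually similar to input_img by
    also matching against neighbours of already-matched images (a stand-in
    for the paper's iterative visual matching; parameters are assumptions)."""
    matched = {img for img in geo_images
               if similarity(input_img, img) >= threshold}
    for _ in range(iterations - 1):
        new = {img for m in matched for img in geo_images
               if img not in matched and similarity(m, img) >= threshold}
        if not new:
            break
        matched |= new
    return matched

def recommend_tags(input_img, geo_images, tags_of, similarity, top_k=3):
    """Score each tag of the geo-neighbours; tags from visually matched
    images get a (hypothetical) fixed boost. Returns the top_k tags."""
    matched = iterative_visual_matching(input_img, geo_images, similarity)
    scores = defaultdict(float)
    for img in geo_images:
        weight = 2.0 if img in matched else 1.0  # boost visually similar images
        for tag in tags_of[img]:
            scores[tag] += weight
    # Sort by descending score, ties broken alphabetically for determinism.
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [tag for tag, _ in ranked[:top_k]]

# Toy example: "q" is the unlabeled input image; "a", "b", "c" were taken nearby.
_sim = {frozenset(("q", "a")): 0.9,   # a directly matches q
        frozenset(("a", "b")): 0.8,   # b is found via a in the second iteration
        frozenset(("q", "b")): 0.2,
        frozenset(("q", "c")): 0.1}

def similarity(x, y):
    return _sim.get(frozenset((x, y)), 0.0)

tags_of = {"a": ["eiffel", "paris"],
           "b": ["paris", "night"],
           "c": ["food"]}

recommended = recommend_tags("q", ["a", "b", "c"], tags_of, similarity)
```

With these toy values, "paris" scores highest (it appears on both visually matched images), illustrating how the iterative step lets image "b" contribute even though it is not directly similar to the input.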