Semantic keyword-based retrieval of photos taken with mobile devices

  • Authors:
  • Windson Viana; Samira Hammiche; Bogdan Moisuc; Marlène Villanova-Oliver; Jérôme Gensel; Hervé Martin

  • Affiliations:
  • LIG - Laboratoire Informatique de Grenoble, Saint Martin d'Hères, France (all authors)

  • Venue:
  • Proceedings of the 6th International Conference on Advances in Mobile Computing and Multimedia
  • Year:
  • 2008

Abstract

This paper presents an approach for incorporating contextual metadata into a keyword-based photo retrieval process. We use our mobile annotation system PhotoMap to create metadata describing the photo shoot context (e.g., street address, nearby objects, season, lighting, nearby people). These metadata are then used to generate a set of stamped words for indexing each photo. We adapt the Vector Space Model (VSM) in order to transform these shoot context words into document-vector terms. Furthermore, spatial reasoning is used to infer new potential indexing terms. We define methods for weighting these terms and for handling query matching. We also detail retrieval experiments carried out using PhotoMap and Flickr geotagged photos, and we illustrate the advantages of using Wikipedia georeferenced objects for indexing photos.
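
The abstract's central idea, treating each photo's shoot-context keywords as a document vector in the classical VSM and ranking photos by similarity to a keyword query, can be sketched roughly as below. The photo names, keyword sets, and the TF-IDF-style weighting are illustrative assumptions only; the paper defines its own term-weighting and query-matching methods, which are not reproduced here.

```python
import math
from collections import Counter

# Hypothetical context-keyword sets, as might be produced by a PhotoMap-like
# annotator (street address, season, lighting, nearby objects, ...).
photos = {
    "photo1": ["grenoble", "winter", "daylight", "bastille", "outdoor"],
    "photo2": ["paris", "summer", "night", "eiffel_tower", "outdoor"],
    "photo3": ["grenoble", "summer", "daylight", "museum", "indoor"],
}

def tf_idf_vectors(docs):
    """Build TF-IDF-weighted term vectors, one per photo (illustrative weighting)."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for terms in docs.values():
        df.update(set(terms))
    vectors = {}
    for name, terms in docs.items():
        tf = Counter(terms)
        vectors[name] = {
            t: (tf[t] / len(terms)) * math.log(n / df[t]) for t in tf
        }
    return vectors

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(vec_a.get(t, 0.0) * w for t, w in vec_b.items())
    na = math.sqrt(sum(w * w for w in vec_a.values()))
    nb = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = tf_idf_vectors(photos)
# A keyword query is treated as a small document vector, as in the classical VSM.
query = {"grenoble": 1.0, "summer": 1.0}
ranking = sorted(vectors, key=lambda p: cosine(query, vectors[p]), reverse=True)
print(ranking)  # photos ordered by similarity to the query
```

In this toy setup the query {"grenoble", "summer"} ranks photo3 first because its context vector shares both terms; the spatial-reasoning step described in the abstract would add further inferred terms (e.g., nearby Wikipedia georeferenced objects) to these vectors before matching.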