Large Scale Tag Recommendation Using Different Image Representations

  • Authors:
Rabeeh Abbasi, Marcin Grzegorzek, Steffen Staab

  • Affiliations:
ISWeb - Information Systems and Semantic Web, University of Koblenz-Landau, 56070 Koblenz, Germany (all authors)

  • Venue:
  • SAMT '09 Proceedings of the 4th International Conference on Semantic and Digital Media Technologies: Semantic Multimedia
  • Year:
  • 2009

Abstract

Nowadays, geographical coordinates (geo-tags), social annotations (tags), and low-level features are available in large image datasets. In this paper, we exploit these three kinds of image descriptions to suggest possible annotations for new images uploaded to a social tagging system. To compare the benefit each description type brings to a tag recommender system on its own, we investigate them independently of each other. First, the existing data collection is clustered separately by geographical coordinates, tags, and low-level features; additionally, random clustering is performed to provide a baseline for the experimental results. Once a new image has been uploaded to the system, it is assigned to one of the clusters using either its geographical or its low-level representation. Finally, the most representative tags of the resulting cluster are suggested to the user for annotating the new image. Large-scale experiments on more than 400,000 images compare the different image representation techniques in terms of precision and recall of the recommended tags.
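
Below is a minimal sketch of such a cluster-then-recommend pipeline, assuming each training image is given as a feature vector (e.g., geo-coordinates or low-level visual features) together with its tag list. The use of scikit-learn's KMeans, the function names, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of cluster-based tag recommendation (not the paper's code).
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def build_recommender(features, tag_lists, n_clusters=100, top_k=5):
    """Cluster the training images and keep the most frequent tags per cluster."""
    kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(features)
    cluster_tags = {}
    for cluster_id in range(n_clusters):
        counts = Counter()
        for tags, label in zip(tag_lists, kmeans.labels_):
            if label == cluster_id:
                counts.update(tags)
        cluster_tags[cluster_id] = [t for t, _ in counts.most_common(top_k)]
    return kmeans, cluster_tags


def recommend(kmeans, cluster_tags, new_feature):
    """Assign a new image to its nearest cluster and suggest that cluster's top tags."""
    cluster_id = int(kmeans.predict(np.asarray(new_feature).reshape(1, -1))[0])
    return cluster_tags[cluster_id]


if __name__ == "__main__":
    # Toy geo-coordinates (latitude, longitude) and hypothetical tags.
    features = np.array([[48.1, 11.6], [48.2, 11.5], [52.5, 13.4], [52.4, 13.3]])
    tag_lists = [["munich", "beer"], ["munich", "park"],
                 ["berlin", "wall"], ["berlin", "museum"]]
    model, tags_per_cluster = build_recommender(features, tag_lists, n_clusters=2, top_k=3)
    print(recommend(model, tags_per_cluster, [48.15, 11.55]))  # likely munich-related tags
```

The same skeleton applies to any of the three representations in the abstract: only the choice of feature vector (geo-coordinates, tag co-occurrence features, or low-level visual features) changes, which is what makes the representations directly comparable in terms of precision and recall.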