Multi-modal visual concept classification of images via Markov random walk over tags

  • Authors:
  • Motoaki Kawanabe;Alexander Binder;Christina Muller;Wojciech Wojcikiewicz

  • Affiliations:
  • Fraunhofer Institute FIRST, Kekuléstr. 7, 12489 Berlin, Germany;Technical University of Berlin, Franklinstr. 28 / 29, 10587 Berlin, Germany;Technical University of Berlin, Franklinstr. 28 / 29, 10587 Berlin, Germany;Technical University of Berlin, Franklinstr. 28 / 29, 10587 Berlin, Germany

  • Venue:
  • WACV '11 Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV)
  • Year:
  • 2011


Abstract

Automatic annotation of images is a challenging task in computer vision because of the “semantic gap” between high-level visual concepts and image appearances. User tags attached to images can provide additional information to bridge this gap, even though they are partially uninformative or misleading. In this work, we investigate multi-modal visual concept classification based on visual features and user tags via kernel-based classifiers. A key issue is how to construct kernels between sets of tags. We deploy Markov random walks on graphs of key tags to incorporate co-occurrence between them; this procedure acts as a smoothing of tag-based features. Our experimental results on the ImageCLEF 2010 PhotoAnnotation benchmark show that the proposed method outperforms both a baseline relying solely on visual information and a recently published state-of-the-art approach.
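
The abstract does not include code, so the following is only a minimal sketch of the general idea: binary tag-indicator features are diffused over a co-occurrence graph of key tags by a few steps of an interpolation-style Markov random walk, and a kernel is then computed on the smoothed features. The function names, the `alpha`/`steps` parameters, and the linear kernel choice are illustrative assumptions; the paper's exact graph construction, walk formulation, and kernel combination with visual features may differ.

```python
import numpy as np

def cooccurrence_transition_matrix(tag_sets, vocab):
    """Row-stochastic transition matrix built from tag co-occurrence counts.

    tag_sets : list of sets of tag strings (one set per image)
    vocab    : list of key tags defining the graph nodes
    """
    idx = {t: i for i, t in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for tags in tag_sets:
        present = [idx[t] for t in tags if t in idx]
        for i in present:
            for j in present:
                if i != j:
                    C[i, j] += 1.0
    # Row-normalize; tags with no co-occurrences get a self-loop
    # so the matrix stays stochastic.
    row_sums = C.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0,
                 C / np.maximum(row_sums, 1e-12),
                 np.eye(len(vocab)))
    return P

def smooth_tag_features(X, P, alpha=0.5, steps=2):
    """Smooth binary tag-indicator features X (n_images x n_tags) by a few
    random-walk steps over the tag graph, mixing the original features with
    their one-step propagation at each iteration (weight alpha is assumed)."""
    S = X.astype(float).copy()
    for _ in range(steps):
        S = (1 - alpha) * X + alpha * (S @ P)
    return S

def linear_tag_kernel(S):
    """Linear kernel between smoothed tag features; such a tag kernel can be
    combined with visual kernels in a kernel-based classifier."""
    return S @ S.T
```

As a usage sketch, one would collect the user tags of the training images, build `P` once from their co-occurrences, smooth the tag-indicator matrix of both training and test images with the same `P`, and feed `linear_tag_kernel` (together with kernels on visual features) to an SVM or similar kernel classifier.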