Large-scale image annotation using visual synset

  • Authors:
  • David Tsai; Yushi Jing; Yi Liu; Henry A. Rowley; Sergey Ioffe; James M. Rehg

  • Affiliations:
  • David Tsai, James M. Rehg: Computational Perception Lab, School of Interactive Computing, Georgia Institute of Technology, Atlanta, USA; Yushi Jing, Yi Liu, Henry A. Rowley, Sergey Ioffe: Google Research, Mountain View, CA, USA

  • Venue:
  • ICCV '11: Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011

Abstract

We address the problem of large-scale annotation of web images. Our approach is based on the concept of a visual synset: an organization of images that are visually similar and semantically related. Each visual synset represents a single prototypical visual concept and has an associated set of weighted annotations. Linear SVMs are used to predict visual synset membership for unseen images, and a weighted voting rule combines the predictions across synsets into a ranked list of annotations. We demonstrate that visual synsets outperform standard methods on a new annotation database containing more than 200 million images and 300 thousand annotations, the largest reported to date.
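
To make the prediction step concrete, the sketch below shows one plausible reading of the weighted voting rule described in the abstract. All names (`rank_annotations`, `svm_scores`, `synset_annotations`) and the positive-score gating are illustrative assumptions, not the paper's implementation; the abstract does not specify exactly how SVM scores and annotation weights are combined.

```python
from collections import defaultdict

def rank_annotations(svm_scores, synset_annotations, top_k=10):
    """Weighted-voting sketch: combine per-synset SVM scores with each
    synset's weighted annotations to rank labels for one image.

    svm_scores:         {synset_id: score} from the per-synset linear SVMs
    synset_annotations: {synset_id: {annotation: weight}}
    """
    votes = defaultdict(float)
    for synset_id, score in svm_scores.items():
        # Assumption: only synsets with positive SVM scores cast votes.
        if score <= 0.0:
            continue
        # Each synset votes for its annotations, scaled by its SVM confidence.
        for annotation, weight in synset_annotations[synset_id].items():
            votes[annotation] += score * weight
    # Ranked list of (annotation, vote) pairs, highest vote first.
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy usage with three hypothetical synsets.
scores = {"synset_dog_1": 1.3, "synset_dog_2": 0.4, "synset_cat_1": -0.8}
annotations = {
    "synset_dog_1": {"dog": 0.9, "puppy": 0.5},
    "synset_dog_2": {"dog": 0.7, "golden retriever": 0.6},
    "synset_cat_1": {"cat": 0.9},
}
print(rank_annotations(scores, annotations, top_k=3))
```

A multiplicative combination of SVM confidence and annotation weight is one natural aggregation; the paper itself may use a different voting scheme.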