Baselines for Image Annotation

  • Authors: Ameesh Makadia (Google Research, New York, USA 10011); Vladimir Pavlovic (Rutgers University, Piscataway, USA 08854); Sanjiv Kumar (Google Research, New York, USA 10011)

  • Venue: International Journal of Computer Vision
  • Year: 2010


Abstract

Automatically assigning keywords to images is of great interest because it allows one to retrieve, index, organize, and understand large collections of image data. Many image annotation techniques proposed in the last decade achieve reasonable performance on standard datasets. However, most of these works fail to compare their methods against simple baselines that would justify the need for complex models and the associated training. In this work, we introduce a new and simple baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique uses global low-level image features and a simple combination of basic distance measures to find the nearest neighbors of a given image. Keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard datasets and one large Web dataset. We believe that such a baseline will provide a strong platform for comparing and better understanding future annotation techniques.
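The pipeline the abstract describes can be sketched in a few lines: average several per-feature distances (each rescaled so no single feature dominates), rank training images by the combined distance, and transfer labels greedily, taking all of the nearest neighbor's keywords first and filling any remaining slots by keyword frequency among the other neighbors. This is a minimal illustrative sketch under those assumptions, not the authors' exact implementation; the function names and the min-max rescaling are choices made here for clarity.

```python
import numpy as np

def combined_distance(query_feats, db_feats):
    """Average of per-feature L2 distances, each rescaled to [0, 1]
    (a simple stand-in for the paper's combination of basic distances).
    query_feats: list of 1-D arrays, one per feature type.
    db_feats: list of 2-D arrays (n_images x dim), one per feature type."""
    dists = []
    for q, db in zip(query_feats, db_feats):
        d = np.linalg.norm(db - q, axis=1)  # distance to every database image
        rng = d.max() - d.min()
        dists.append((d - d.min()) / rng if rng > 0 else np.zeros_like(d))
    return np.mean(dists, axis=0)

def greedy_label_transfer(neighbor_labels, n_keywords=5):
    """Assign the nearest neighbor's labels first; if more keywords are
    needed, add labels ranked by frequency among the remaining neighbors."""
    assigned = list(neighbor_labels[0])[:n_keywords]
    if len(assigned) < n_keywords:
        counts = {}
        for labels in neighbor_labels[1:]:
            for lab in labels:
                if lab not in assigned:
                    counts[lab] = counts.get(lab, 0) + 1
        for lab, _ in sorted(counts.items(), key=lambda kv: -kv[1]):
            assigned.append(lab)
            if len(assigned) == n_keywords:
                break
    return assigned

# Toy usage: one feature type, three database images.
query_feats = [np.array([0.0, 0.0])]
db_feats = [np.array([[0.0, 0.1], [1.0, 1.0], [0.5, 0.5]])]
order = np.argsort(combined_distance(query_feats, db_feats))
labels = [["sky", "sea"], ["car"], ["sky", "tree"]]
keywords = greedy_label_transfer([labels[i] for i in order], n_keywords=3)
```

In this toy example the nearest image contributes "sky" and "sea", and the third slot is filled from the remaining neighbors' keywords.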