Automatic Image Annotation Using Global and Local Features

  • Authors:
  • Mária Bieliková; Eduard Kuric

  • Venue:
  • SMAP '11 Proceedings of the 2011 Sixth International Workshop on Semantic Media Adaptation and Personalization
  • Year:
  • 2011

Abstract

Automatic image annotation methods require a high-quality training image dataset, from which annotations for target images are obtained. At present, the main problem with these methods is their low effectiveness and poor scalability when a large-scale training dataset is used. Current methods use only global image features for search. We propose a method for obtaining annotations for target images that is based on a novel combination of local and global features during the search stage. We are able to ensure the robustness and generalization needed by complex queries and to substantially reduce irrelevant results. In our method, in analogy with text documents, the global features represent words extracted from the paragraphs of a document with the highest frequency of occurrence, and the local features represent keywords extracted from the entire document. We are able to identify objects directly in target images, and for each obtained annotation we estimate the probability of its relevance. During search, we retrieve similar images containing the correct keywords for a given target image. For example, we prioritize images in which the objects of interest extracted from the target image are dominant, since it is more likely that the words associated with such images describe those objects. We tailored our method to large-scale training datasets and evaluated it on the Corel5K corpus, which consists of 5000 images from 50 Corel Stock Photo CDs.
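The retrieval-and-transfer idea in the abstract can be illustrated with a minimal sketch: rank training images by a weighted combination of global and local feature similarity, then accumulate the keywords of the top-ranked neighbours into normalised relevance scores. All names here (`annotate`, the feature layout, cosine similarity, the `alpha` weight) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def annotate(target_global, target_local, train_feats, train_keywords, k=5, alpha=0.5):
    """Score candidate keywords for a target image by transferring keywords
    from its k most similar training images (a sketch of neighbour-based
    annotation transfer; not the authors' exact algorithm).

    train_feats: list of (global_vec, local_vec) pairs, one per training image.
    train_keywords: list of keyword lists, aligned with train_feats.
    alpha: weight of global vs. local similarity (hypothetical parameter).
    """
    sims = []
    for g, l in train_feats:
        # Cosine similarity for both feature types -- one plausible choice.
        sg = np.dot(target_global, g) / (np.linalg.norm(target_global) * np.linalg.norm(g))
        sl = np.dot(target_local, l) / (np.linalg.norm(target_local) * np.linalg.norm(l))
        sims.append(alpha * sg + (1 - alpha) * sl)

    # Take the k most similar training images.
    order = np.argsort(sims)[::-1][:k]

    # Accumulate keyword scores weighted by neighbour similarity.
    scores = {}
    for i in order:
        for w in train_keywords[i]:
            scores[w] = scores.get(w, 0.0) + sims[i]

    # Normalise so the scores can be read as rough relevance probabilities.
    total = sum(scores.values()) or 1.0
    return {w: s / total for w, s in scores.items()}
```

In this sketch the dominance heuristic from the abstract would correspond to boosting the similarity of neighbours whose matched local features cover a large fraction of the image; that weighting is omitted here for brevity.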