Exploiting the entire feature space with sparsity for automatic image annotation

  • Authors:
  • Zhigang Ma;Yi Yang;Feiping Nie;Jasper Uijlings;Nicu Sebe

  • Affiliations:
  • University of Trento, Trento, Italy;Carnegie Mellon University, Pittsburgh, PA, USA;University of Texas at Arlington, Arlington, TX, USA;University of Trento, Trento, Italy;University of Trento, Trento, Italy

  • Venue:
  • MM '11: Proceedings of the 19th ACM International Conference on Multimedia
  • Year:
  • 2011

Abstract

The explosive growth of digital images requires effective methods to manage them. Among existing methods, automatic image annotation has proved to be an important technique for image management tasks, e.g., image retrieval over large-scale image databases. Automatic image annotation has been widely studied in recent years and a considerable number of approaches have been proposed. However, the performance of these methods is not yet satisfactory, which calls for further research on image annotation. In this paper, we propose a novel semi-supervised framework built upon feature selection for automatic image annotation. Our method jointly selects the most relevant features from all the data points using a sparsity-based model, and exploits both labeled and unlabeled data to learn the manifold structure. Our framework simultaneously learns a robust classifier for image annotation by selecting the discriminative features related to the semantic concepts. To solve the objective function of our framework, we propose an efficient iterative algorithm. Extensive experiments are performed on different real-world image datasets, and the results demonstrate the promising performance of our framework for automatic image annotation.
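To illustrate the kind of approach the abstract describes, below is a minimal sketch of semi-supervised feature selection with an l2,1-norm sparsity penalty and a graph-Laplacian manifold term, solved by the standard iteratively re-weighted scheme. It is not the authors' exact formulation: the symbols (X, Y, L), the parameters alpha and beta, and the update rule are illustrative assumptions based on the abstract only.

```python
# Hypothetical sketch of sparsity-based semi-supervised feature selection,
# NOT the paper's exact objective or algorithm.
import numpy as np

def semi_supervised_sparse_fs(X, Y, L, alpha=1.0, beta=1.0, n_iter=30):
    """X: d x n feature matrix (labeled + unlabeled images),
    Y: n x c label matrix (all-zero rows for unlabeled images),
    L: n x n graph Laplacian capturing the manifold structure."""
    d, n = X.shape
    W = np.zeros((d, Y.shape[1]))
    D = np.eye(d)                       # re-weighting matrix for the l2,1 norm
    for _ in range(n_iter):
        # Closed-form update of W with D fixed:
        # (X (I + alpha*L) X^T + beta*D) W = X Y
        A = X @ (np.eye(n) + alpha * L) @ X.T + beta * D
        W = np.linalg.solve(A, X @ Y)
        # Update D from the current row norms of W (eps avoids division by zero)
        row_norms = np.linalg.norm(W, axis=1) + 1e-12
        D = np.diag(1.0 / (2.0 * row_norms))
    return W                            # rows with large norms mark selected features

# Toy usage: 50 features, 80 images (40 labeled), 5 concepts.
# A zero Laplacian simply drops the manifold term; a real run would build a
# k-NN graph Laplacian over all images instead.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 80))
Y = np.zeros((80, 5))
Y[np.arange(40), rng.integers(0, 5, 40)] = 1.0
W = semi_supervised_sparse_fs(X, Y, np.zeros((80, 80)))
print(np.argsort(-np.linalg.norm(W, axis=1))[:10])  # ten highest-scoring features
```

The re-weighted update is the common trick for l2,1-regularized objectives: with D fixed the problem is a ridge-like least squares with a closed-form solution, and updating D from the row norms of W pushes whole feature rows toward zero, which is what makes the selection joint across all semantic concepts.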