Support vector description of clusters for content-based image annotation

  • Authors:
  • Liang Sun;Hongwei Ge;Shinichi Yoshida;Yanchun Liang;Guozhen Tan

  • Venue:
  • Pattern Recognition
  • Year:
  • 2014

Abstract

Continual progress in the fields of computer vision and machine learning has provided opportunities to develop automatic tools for tagging images, which facilitates image search and retrieval. However, due to the complexity of real-world image systems, effective and efficient image annotation remains a challenging problem. In this paper, we present an annotation technique based on image content and word correlations. Clusters of images with manually tagged words are used as training instances. The images within each cluster are modeled using a kernel method: the image vectors are mapped to a higher-dimensional space, and the vectors identified as support vectors are used to describe the cluster. To measure the strength of the association between an image and a model described by support vectors, the distance from the image to the model is computed; a smaller distance indicates a stronger association. Word-to-word correlations are also incorporated into the annotation framework. To tag an image, the system predicts annotation words by combining the distances from the image to the models with the word-to-word correlations in a unified probabilistic framework. Simulated experiments were conducted on three benchmark image data sets. The results demonstrate the performance of the proposed technique and compare it with that of other recently reported techniques.
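
The abstract describes the pipeline only at a high level. The sketch below illustrates one possible reading of it, not the authors' exact formulation: scikit-learn's OneClassSVM with an RBF kernel stands in for the paper's support vector description of each cluster, a toy co-occurrence matrix stands in for the word correlation model, and synthetic feature vectors replace real image descriptors. All names, data, and hyper-parameters are hypothetical.

```python
# Minimal sketch of support-vector cluster description for image annotation.
# Assumptions: OneClassSVM (RBF kernel) approximates the support vector
# description of a cluster; word correlations come from a toy co-occurrence
# matrix; features are synthetic stand-ins for image descriptors.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy training data: two clusters of image feature vectors, each tagged with words.
clusters = {
    "beach": rng.normal(loc=0.0, scale=1.0, size=(50, 8)),
    "forest": rng.normal(loc=3.0, scale=1.0, size=(50, 8)),
}
cluster_words = {"beach": {"sea", "sand", "sky"}, "forest": {"tree", "sky"}}
vocab = sorted({w for ws in cluster_words.values() for w in ws})

# Fit one support-vector description per cluster (hypothetical hyper-parameters).
models = {
    name: OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X)
    for name, X in clusters.items()
}

# Word-to-word correlations estimated from word co-occurrence across clusters.
co = np.zeros((len(vocab), len(vocab)))
for ws in cluster_words.values():
    for i, wi in enumerate(vocab):
        for j, wj in enumerate(vocab):
            co[i, j] += (wi in ws) and (wj in ws)
word_corr = co / co.sum(axis=1, keepdims=True)  # row-normalised correlations

def annotate(image, top_k=2, alpha=0.7):
    """Score words by cluster closeness blended with word correlations."""
    # Closeness of the image to each cluster model: higher decision values
    # mean the image lies nearer to (or inside) the described region.
    closeness = {name: m.decision_function(image.reshape(1, -1))[0]
                 for name, m in models.items()}
    # Soft-max over clusters gives a rough probability of cluster membership.
    vals = np.array(list(closeness.values()))
    probs = np.exp(vals - vals.max())
    probs /= probs.sum()
    p_cluster = dict(zip(closeness.keys(), probs))

    # Content-based word score: probability mass of clusters tagged with the word.
    content = np.array([sum(p for c, p in p_cluster.items()
                            if w in cluster_words[c]) for w in vocab])
    # Blend with word-to-word correlations (alpha weights the image content term).
    score = alpha * content + (1 - alpha) * word_corr.T @ content
    order = np.argsort(score)[::-1][:top_k]
    return [vocab[i] for i in order]

# Predicted words for a beach-like feature vector.
print(annotate(rng.normal(loc=0.0, scale=1.0, size=8)))
```

The design choice mirrored here is the one stated in the abstract: per-cluster distances provide the content-based evidence, and word correlations redistribute that evidence across related words before the top-ranked words are returned as annotations.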