Automatic image annotation with cooperation of concept-specific and universal visual vocabularies

  • Authors:
  • Yanjie Wang; Xiabi Liu; Yunde Jia

  • Affiliations:
  • Beijing Laboratory of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing, P.R. China (all authors)

  • Venue:
  • MMM '10: Proceedings of the 16th International Conference on Advances in Multimedia Modeling
  • Year:
  • 2010

Abstract

This paper proposes an automatic image annotation method based on concept-specific image representation and discriminative learning. First, concept-specific visual vocabularies are generated by assuming that the localized features extracted from images labeled with a specific concept follow a Gaussian Mixture Model (GMM) distribution; each component of the GMM is taken as a visual token of that concept. The visual tokens of all concepts are then clustered to obtain a universal token set. Second, an image is represented as a concept-specific feature vector: for all of its localized features, the average posterior probability of each universal visual token is computed and assigned to the corresponding concept-specific visual tokens, so the feature vector for an image varies with the concept under consideration. Finally, image annotation and retrieval are implemented under Max-Min posterior Pseudo-probabilities (MMP), a discriminative learning framework for Bayesian classifiers. The proposed method was evaluated on the popular Corel-5K database; experimental results, with comparisons to the state of the art, show that the method is promising.
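
The two vocabulary-and-feature steps can be summarized in a short sketch. The code below is a minimal, simplified reading of the abstract, not the authors' implementation: all function names and parameters (`build_vocabularies`, `image_feature`, `n_components`, `n_universal`) are illustrative, the posterior over universal tokens is approximated by a softmax over distances to k-means centers rather than the paper's exact formulation, and the MMP classifier itself is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans


def build_vocabularies(features_by_concept, n_components=8, n_universal=64):
    """Fit one GMM per concept; each GMM component is a visual token of
    that concept. Cluster the means of all concept tokens to obtain a
    universal token set (illustrative choice: k-means over token means)."""
    concept_gmms, all_means = {}, []
    for concept, feats in features_by_concept.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=0).fit(feats)
        concept_gmms[concept] = gmm
        all_means.append(gmm.means_)
    universal = KMeans(n_clusters=n_universal, n_init=10,
                       random_state=0).fit(np.vstack(all_means))
    return concept_gmms, universal


def image_feature(local_feats, concept_gmm, universal):
    """Concept-specific feature vector for one image: average the posterior
    probability of each universal token over all localized features, then
    assign each averaged value to the concept's own tokens (here via the
    nearest universal token, a simplifying assumption)."""
    centers = universal.cluster_centers_
    d2 = ((local_feats[:, None, :] - centers[None]) ** 2).sum(-1)
    post = np.exp(-(d2 - d2.min(axis=1, keepdims=True)))   # softmax proxy
    post /= post.sum(axis=1, keepdims=True)                # for the posterior
    avg = post.mean(axis=0)                                # average over features
    idx = universal.predict(concept_gmm.means_)            # concept -> universal map
    return avg[idx]                                        # varies per concept


if __name__ == "__main__":
    # Toy demo with synthetic localized features (16-dim descriptors).
    rng = np.random.default_rng(0)
    feats = {c: rng.normal(size=(200, 16)) + i
             for i, c in enumerate(["sky", "grass"])}
    gmms, universal = build_vocabularies(feats, n_components=8, n_universal=12)
    vec = image_feature(rng.normal(size=(50, 16)), gmms["sky"], universal)
    print(vec.shape)  # one entry per "sky" token
```

Because the mapping from universal tokens back to concept tokens depends on each concept's GMM, the same image yields a different feature vector for every concept, which is the property the representation is designed to have.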