What is a complete set of keywords for image description & annotation on the web

  • Authors:
Xianming Liu, Hongxun Yao, Rongrong Ji, Pengfei Xu, Xiaoshuai Sun

  • Affiliations:
Harbin Institute of Technology, Harbin, China (all authors)

  • Venue:
  • MM '09 Proceedings of the 17th ACM international conference on Multimedia
  • Year:
  • 2009

Abstract

Does there exist a compact set of keywords that can completely and effectively cover the image annotation problem when expanded from? In this paper, we answer this question by presenting a complete-set framework for image annotation, motivated by the existence of semantic ontologies. To generate this set, we propose a cross-modal optimization strategy for topic decomposition that exploits both textual and visual information, based on a so-called Bipartite LSA model, which minimizes multimodal error energy functions within a probabilistic Latent Semantic Analysis (pLSA) model. To achieve complete-set-based annotation, we present a Gaussian-kernel generative keyword generation procedure, which casts keyword annotation as a probabilistic generative process. A group of experiments is performed on the Washington University image database and on 80,000 Flickr images, with comparisons to the state of the art. Finally, potential advantages and future improvements of our framework beyond the scope of topic modeling are discussed.
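
To make the two ingredients named in the abstract more concrete, the sketch below shows (1) a generic pLSA-style topic decomposition fit by EM over a joint textual/visual term space, and (2) a Gaussian-kernel weighted scoring step that generates keywords for a new image. This is only an illustrative stand-in, not the authors' Bipartite LSA formulation or their error energy functions; all names, parameters (n_topics, sigma), and the synthetic data are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def plsa(counts, n_topics, n_iter=50):
    """Fit pLSA by EM on a (documents x terms) co-occurrence matrix.

    Here `counts` concatenates textual keywords and quantized visual words
    along the term axis, so a single latent topic space couples the two
    modalities. Returns P(topic | doc) and P(term | topic).
    """
    n_docs, n_terms = counts.shape
    p_z_d = rng.random((n_docs, n_topics))          # P(z | d)
    p_w_z = rng.random((n_topics, n_terms))         # P(w | z)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (docs, terms, topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts
        weighted = counts[:, :, None] * joint
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def gaussian_kernel_keywords(query_feat, train_feats, train_keyword_probs,
                             sigma=1.0, top_k=5):
    """Score candidate keywords for a query image.

    Each training image votes with its keyword distribution, weighted by a
    Gaussian kernel on visual-feature distance; a simplified proxy for a
    generative keyword-annotation step.
    """
    d2 = ((train_feats - query_feat) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2.0 * sigma ** 2))
    scores = weights @ train_keyword_probs
    return np.argsort(scores)[::-1][:top_k]

# Toy usage with synthetic data
counts = rng.integers(0, 4, size=(20, 30)).astype(float)   # 20 docs, 30 joint terms
p_z_d, p_w_z = plsa(counts, n_topics=4)
train_feats = rng.random((20, 8))                           # 8-D visual features
train_kw = p_z_d @ p_w_z[:, :10]                            # keyword slice of the vocabulary
print(gaussian_kernel_keywords(rng.random(8), train_feats, train_kw))

The toy run simply prints the indices of the top-scoring candidate keywords; in the paper's setting these would be drawn from the proposed complete keyword set rather than from a random vocabulary.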