Bayesian learning of hierarchical multinomial mixture models of concepts for automatic image annotation

  • Authors:
  • Rui Shi;Tat-Seng Chua;Chin-Hui Lee;Sheng Gao

  • Affiliations:
  • School of Computing, National University of Singapore, Singapore;School of Computing, National University of Singapore, Singapore;School of ECE, Georgia Institute of Technology, Atlanta, GA;Institute for Infocomm Research, Singapore

  • Venue:
  • CIVR'06 Proceedings of the 5th international conference on Image and Video Retrieval
  • Year:
  • 2006

Abstract

We propose a novel Bayesian learning framework for hierarchical mixture models that incorporates prior hierarchical knowledge into the representations of multi-level concept structures in images. Characterizing image concepts by mixture models is one of the most effective techniques in automatic image annotation (AIA) for concept-based image retrieval. However, it also poses problems when large-scale models are needed to cover the wide variations in image samples. To alleviate the difficulties of estimating a large number of parameters from insufficient training images, we treat the mixture model parameters as random variables characterized by a joint conjugate prior density. This facilitates a statistical combination of the likelihood function of the available training data and the prior density of the concept parameters into a well-defined posterior density, whose parameters can then be estimated via a maximum a posteriori (MAP) criterion. Experimental results on the Corel image dataset with a set of 371 concepts indicate that the proposed Bayesian approach achieves a maximum F1 measure of 0.169, outperforming many state-of-the-art AIA algorithms.
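To make the MAP idea concrete, the sketch below shows one way to fit a multinomial mixture over visual-token counts with a symmetric Dirichlet conjugate prior, where the M-step adds the prior's pseudo-counts before renormalizing. This is a minimal illustration of conjugate-prior MAP estimation, not the authors' implementation; the function name, the symmetric hyperparameter `alpha`, and the EM structure are assumptions for the example.

```python
import numpy as np

def fit_map_multinomial_mixture(X, n_components=3, alpha=2.0, n_iter=100, seed=0):
    """EM for a multinomial mixture with MAP (Dirichlet-prior) M-step updates.

    X     : (n_images, n_tokens) array of visual-token counts per image.
    alpha : symmetric Dirichlet hyperparameter; alpha > 1 adds pseudo-counts,
            alpha = 1 recovers plain maximum-likelihood estimation.
    Returns mixture weights `pi` and per-component token probabilities `theta`.
    (Hypothetical sketch, not the paper's code.)
    """
    rng = np.random.default_rng(seed)
    n, v = X.shape
    theta = rng.dirichlet(np.ones(v), size=n_components)   # (K, V) token distributions
    pi = np.full(n_components, 1.0 / n_components)          # (K,) mixture weights

    for _ in range(n_iter):
        # E-step: responsibilities from the log-likelihood of each image under each component.
        log_resp = np.log(pi)[None, :] + X @ np.log(theta).T   # (n, K)
        log_resp -= log_resp.max(axis=1, keepdims=True)
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step with MAP smoothing: the Dirichlet prior contributes (alpha - 1) pseudo-counts.
        counts = resp.T @ X                                     # (K, V) expected token counts
        theta = counts + (alpha - 1.0)
        theta /= theta.sum(axis=1, keepdims=True)
        pi = resp.sum(axis=0) / n

    return pi, theta
```

In this toy setting the pseudo-counts play the role the abstract describes: when training images are scarce, the prior keeps the per-concept estimates from collapsing onto the few observed samples, while with abundant data the likelihood term dominates.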