Enhancing image annotation by integrating concept ontology and text-based Bayesian learning model

  • Authors:
  • Rui Shi; Chin-Hui Lee; Tat-Seng Chua

  • Affiliations:
  • National University of Singapore, Singapore; Georgia Institute of Technology, Atlanta, GA; National University of Singapore, Singapore

  • Venue:
  • Proceedings of the 15th international conference on Multimedia
  • Year:
  • 2007

Abstract

Automatic image annotation (AIA) has been an active research topic in recent years because it supports concept-based image retrieval. However, most existing AIA models depend heavily on the availability of a large number of labeled training samples, which require significant human labeling effort. In this paper, we propose a novel learning framework that integrates a text-based Bayesian model (TBM) with a concept ontology to effectively expand the training set of each concept class, without additional human labeling effort and without collecting additional training images from other data sources. The basic idea is to exploit the text information in the training set to provide additional effective annotations for training images, so that the training data for each concept class can be augmented. In this study we employ Bayesian Hierarchical Multinomial Mixture Models (BHMMMs) as our baseline AIA model. By combining the additional annotations obtained from the TBM into each concept class during the training phase, the performance of BHMMMs is significantly improved on the Corel image dataset with 263 testing concepts, compared to state-of-the-art AIA models under the same experimental configurations.
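
The training-set expansion idea can be pictured with a small sketch: a multinomial naive Bayes classifier (standing in here for the TBM) scores each training image's associated text against every concept, high-confidence concepts become extra annotations, and a toy ontology propagates each annotation to its ancestor concepts. This is a minimal illustration under assumed data, threshold, and ontology; none of these are the paper's actual components or parameters.

```python
# Minimal sketch of TBM-style training-set expansion (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data: each image has associated text and one manual concept label.
texts  = ["tiger grass stripes", "sky clouds plane", "tiger jungle cat",
          "plane airport runway", "cat fur whiskers", "sky sunset clouds"]
labels = ["tiger", "plane", "tiger", "plane", "cat", "sky"]

# Toy concept ontology: each concept maps to its ancestor concepts.
ontology = {"tiger": ["cat", "animal"], "cat": ["animal"],
            "plane": ["vehicle"], "sky": []}

# Bag-of-words features over the associated text.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Multinomial naive Bayes as a stand-in for the text-based Bayesian model.
tbm = MultinomialNB()
tbm.fit(X, labels)

THRESHOLD = 0.5  # assumed confidence cutoff, for illustration only
posteriors = tbm.predict_proba(X)

for text, label, probs in zip(texts, labels, posteriors):
    # Concepts whose posterior clears the threshold become extra annotations.
    extra = {c for c, p in zip(tbm.classes_, probs)
             if p >= THRESHOLD and c != label}
    # Propagate every annotation (manual + expanded) up the ontology.
    expanded = {label} | extra
    for c in list(expanded):
        expanded.update(ontology.get(c, []))
    print(f"{text!r}: manual={label!r}, expanded={sorted(expanded)}")
```

Each image's augmented label set would then be fed to the baseline annotation model during training, which is the role the TBM-derived annotations play for the BHMMMs in the paper.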