A Bayesian approach integrating regional and global features for image semantic learning

  • Authors:
  • Luong-Dong Nguyen, Ghim-Eng Yap, Ying Liu, Ah-Hwee Tan, Liang-Tien Chia, Joo-Hwee Lim

  • Affiliations:
  • School of Computer Engineering, Nanyang Technological University, Singapore (Nguyen, Liu, Tan, Chia); Institute for Infocomm Research, Singapore (Yap, Lim)

  • Venue:
  • ICME '09: Proceedings of the 2009 IEEE International Conference on Multimedia and Expo
  • Year:
  • 2009


Abstract

In content-based image retrieval, the "semantic gap" between visual image features and user semantics makes it hard to predict abstract image categories from low-level features. We present a hybrid system that integrates global features (G-features) and region features (R-features) to predict image semantics. As an intermediary between image features and categories, we introduce the notion of mid-level concepts, which enables us to predict an image's category in three steps. First, a G-prediction system uses G-features to predict the probability of each category for an image. Simultaneously, an R-prediction system analyzes R-features to estimate the probabilities of mid-level concepts in that image. Finally, our hybrid H-prediction system, based on a Bayesian network, reconciles the predictions from both R-prediction and G-prediction to produce the final classification. Experimental results show that this hybrid system significantly outperforms both G-prediction and R-prediction alone.
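To illustrate the three-step idea, the sketch below fuses hypothetical G-prediction category probabilities with R-prediction mid-level concept probabilities using a naive-Bayes-style combination. All function names, categories, concepts, and numbers here are illustrative assumptions for exposition; the paper's actual Bayesian network structure and parameters are not specified in this abstract.

```python
# Hypothetical sketch of H-prediction: Bayesian fusion of G- and
# R-prediction outputs. Names and numbers are illustrative only.

def h_prediction(g_probs, concept_probs, p_concept_given_cat):
    """Combine global category probabilities (G-prediction) with
    mid-level concept probabilities (R-prediction) under a simple
    naive-Bayes-style model, returning a normalized posterior."""
    posterior = {}
    for cat, prior in g_probs.items():
        likelihood = 1.0
        for concept, p_c in concept_probs.items():
            p_given = p_concept_given_cat[cat][concept]
            # Expected likelihood of the (soft) concept evidence:
            # concept present with prob p_c, absent with prob 1 - p_c.
            likelihood *= p_given * p_c + (1.0 - p_given) * (1.0 - p_c)
        posterior[cat] = prior * likelihood
    z = sum(posterior.values())
    return {cat: p / z for cat, p in posterior.items()}

# Toy example: two categories, two mid-level concepts.
g_probs = {"beach": 0.6, "city": 0.4}           # G-prediction output
concept_probs = {"sand": 0.9, "building": 0.2}  # R-prediction output
p_concept_given_cat = {
    "beach": {"sand": 0.8, "building": 0.1},
    "city":  {"sand": 0.1, "building": 0.9},
}

hybrid = h_prediction(g_probs, concept_probs, p_concept_given_cat)
print(max(hybrid, key=hybrid.get))  # the reconciled final category
```

In this toy run the region evidence (sand likely, buildings unlikely) reinforces the global prediction, so "beach" wins; when the two sources disagree, the conditional table decides how much each concept sways the final category.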