Classification and Automatic Annotation Extension of Images Using Bayesian Network

  • Authors:
  • Sabine Barrat; Salvatore Tabbone

  • Affiliations:
  • LORIA-UMR 7503, University of Nancy 2, BP 239, 54506 Vandœuvre-lès-Nancy, France (both authors)

  • Venue:
  • SSPR & SPR '08: Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
  • Year:
  • 2008

Abstract

In many vision problems, it is easier to obtain training data in which only a subset of the images is annotated, rather than a fully annotated set, because this is less demanding for the user. For this reason, we consider the problem of classifying weakly annotated images, where only a small subset of the database is annotated with keywords. In this paper we present and evaluate a new method that improves the effectiveness of content-based image classification by integrating semantic concepts extracted from text and by automatically extending the annotations of images with missing keywords. Our model is inspired by probabilistic graphical model theory: we propose a hierarchical mixture model that can handle missing values. Results of visual-textual classification, reported on a partially and manually annotated database of images collected from the Web, show a 32.3% improvement in recognition rate over classification using visual information alone. Moreover, automatically extending the annotations of images with missing keywords using our model improves on visual-textual classification by a further 6.8%. Finally, the proposed method is experimentally competitive with state-of-the-art classifiers.
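
To make the missing-keyword handling concrete, the following is a minimal sketch, not the authors' implementation: a naive-Bayes-style Bayesian network that combines continuous visual features with binary keyword features and marginalizes out unobserved keywords, which is how a graphical model can treat missing values under a missing-at-random assumption. The class name, the Gaussian/Bernoulli choices, and all parameter names are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration, not the paper's exact model: one Gaussian per class
# and visual dimension, one Bernoulli per class and keyword. Missing keywords
# (np.nan) are simply summed out of the likelihood.
class VisualTextualNB:
    def fit(self, X_vis, X_kw, y):
        # X_vis: (n, d) visual features; X_kw: (n, k) keyword indicators,
        # with np.nan marking keywords that were never annotated.
        self.classes_ = np.unique(y)
        self.priors_, self.mu_, self.var_, self.theta_ = {}, {}, {}, {}
        for c in self.classes_:
            idx = (y == c)
            self.priors_[c] = idx.mean()
            Xv, Xk = X_vis[idx], X_kw[idx]
            self.mu_[c] = Xv.mean(axis=0)
            self.var_[c] = Xv.var(axis=0) + 1e-6  # avoid zero variance
            observed = ~np.isnan(Xk)
            # Bernoulli parameters estimated from observed keywords only,
            # with Laplace smoothing.
            self.theta_[c] = (np.nansum(Xk, axis=0) + 1.0) / (observed.sum(axis=0) + 2.0)
        return self

    def predict(self, X_vis, X_kw):
        preds = []
        for xv, xk in zip(X_vis, X_kw):
            scores = {}
            for c in self.classes_:
                ll = np.log(self.priors_[c])
                # Gaussian log-likelihood of the visual features.
                ll += -0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                                    + (xv - self.mu_[c]) ** 2 / self.var_[c])
                # Bernoulli log-likelihood of the observed keywords only;
                # missing keywords contribute nothing (they are marginalized out).
                obs = ~np.isnan(xk)
                t = self.theta_[c][obs]
                ll += np.sum(xk[obs] * np.log(t) + (1 - xk[obs]) * np.log(1 - t))
                scores[c] = ll
            preds.append(max(scores, key=scores.get))
        return np.array(preds)
```

Under the same assumptions, annotation extension could be sketched on top of this classifier: for an image with a missing keyword, the class posterior combined with the per-class Bernoulli parameters gives a probability that the keyword applies, which can be thresholded to extend the annotation.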