Web image annotation by fusing visual features and textual information

  • Authors:
  • Vincent S. Tseng; Ja-Hwung Su; Bo-Wen Wang; Yu-Ming Lin

  • Affiliations:
  • National Cheng Kung University, Tainan, Taiwan, R.O.C. (all authors)

  • Venue:
  • Proceedings of the 2007 ACM Symposium on Applied Computing
  • Year:
  • 2007

Abstract

In this paper, we propose a novel web image annotation method, namely FMD (Fused annotation by Mixed model graph and Decision tree), which combines visual features and textual information to conceptualize web images. The FMD approach consists of three main processes: 1) constructing the visual-based model, namely Model_MMG, 2) constructing the textual-based model, namely Model_DT, and 3) fusing Model_MMG and Model_DT into Model_FMD for annotating the images. The visual-based annotation model characterizes an image not only by its global content but also by the local content of its composing objects. The textual-based annotation model addresses the user-specified dependency among keywords and the heavy computation caused by the high dimensionality of text features. The experimental results reveal that, by integrating these two different types of features, the proposed FMD method achieves high annotation accuracy for web images.
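
The abstract does not spell out how the two models are fused; a common way to combine a visual-based and a textual-based annotator is a weighted late fusion of their per-keyword confidence scores. The Python sketch below illustrates only that general idea under this assumption; the fuse_annotations function, the alpha weight, and the example scores are hypothetical and are not taken from the paper's Model_FMD definition.

```python
from dataclasses import dataclass

# Hypothetical sketch of late fusion for two image-annotation models.
# The actual FMD fusion (mixed model graph + decision tree) is defined in
# the paper; this only shows a generic score-combination scheme.

@dataclass
class AnnotationModel:
    """Holds precomputed keyword confidence scores in [0, 1] for one image."""
    name: str
    scores: dict  # keyword -> confidence


def fuse_annotations(visual: AnnotationModel,
                     textual: AnnotationModel,
                     alpha: float = 0.5,
                     top_k: int = 5) -> list:
    """Linearly combine visual and textual keyword scores and keep the top-k keywords."""
    keywords = set(visual.scores) | set(textual.scores)
    fused = {
        kw: alpha * visual.scores.get(kw, 0.0)
            + (1.0 - alpha) * textual.scores.get(kw, 0.0)
        for kw in keywords
    }
    # Rank keywords by fused confidence, highest first.
    return sorted(fused, key=fused.get, reverse=True)[:top_k]


if __name__ == "__main__":
    # Illustrative scores only; not from the paper's experiments.
    model_mmg = AnnotationModel("Model_MMG", {"beach": 0.8, "sea": 0.6, "dog": 0.1})
    model_dt = AnnotationModel("Model_DT", {"beach": 0.7, "vacation": 0.5, "sea": 0.4})
    print(fuse_annotations(model_mmg, model_dt, alpha=0.6))
```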