Multi-graph enabled active learning for multimodal web image retrieval

  • Authors:
  • Xin-Jing Wang; Wei-Ying Ma; Lei Zhang; Xing Li

  • Affiliations:
  • Tsinghua University, China; Microsoft Research Asia; Microsoft Research Asia; Tsinghua University, China

  • Venue:
  • Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval
  • Year:
  • 2005

Abstract

In this paper, we propose a multimodal Web image retrieval technique based on multi-graph enabled active learning. The main goal is to leverage the heterogeneous data on the Web to improve retrieval precision. Three graphs are constructed on images' content features, textual annotations, and hyperlinks, namely the Content-Graph, Text-Graph, and Link-Graph, which provide complementary information about the images. By analyzing the three graphs, a training dataset is created automatically and transductive learning is enabled. The transductive learner is a multi-graph-based classifier that simultaneously solves the learning problem and the problem of combining the heterogeneous data. Overall, the proposed approach tackles the problem of unsupervised active learning on the Web graph. Although it is discussed in the context of WWW image retrieval, the approach can be applied to other domains. Experimental results show the effectiveness of our approach.
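The abstract does not give the classifier's details, but the idea of a multi-graph transductive learner can be illustrated with a standard graph-based label-propagation sketch: the three modality graphs are normalized, combined with weights, and labels are diffused from a few seed nodes to all unlabeled ones. The graph weights, the toy adjacency matrices, and the Zhou-et-al.-style update rule below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def normalize(W):
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # avoid division by zero for isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ W @ D_inv_sqrt

def multi_graph_propagation(graphs, weights, Y, alpha=0.9, iters=200):
    # Combine the per-modality graphs into one similarity matrix,
    # then run transductive label propagation:
    #   F <- alpha * S @ F + (1 - alpha) * Y
    S = sum(w * normalize(W) for W, w in zip(graphs, weights))
    S = S / sum(weights)
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F

def chain_graph(n, edges):
    # Undirected adjacency matrix from an edge list.
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0
    return W

# Toy example: 6 images, 2 classes (relevant / irrelevant).
# Hypothetical adjacency matrices standing in for the
# Content-Graph, Text-Graph, and Link-Graph.
content = chain_graph(6, [(0, 1), (1, 2), (3, 4), (4, 5)])
text    = chain_graph(6, [(0, 2), (1, 2), (3, 5)])
link    = chain_graph(6, [(0, 1), (4, 5)])

# Seed labels: image 0 is known relevant, image 5 known irrelevant.
Y = np.zeros((6, 2))
Y[0, 0] = 1.0
Y[5, 1] = 1.0

F = multi_graph_propagation([content, text, link], [1.0, 1.0, 1.0], Y)
pred = F.argmax(axis=1)
print(pred)  # images 0-2 follow the relevant seed, 3-5 the irrelevant one
```

Because the three graphs are only combined after normalization, a modality that links two images missed by the others (e.g. a hyperlink between visually dissimilar pages) still contributes to the diffusion, which is the complementarity the abstract emphasizes.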