Graph based multi-modality learning

  • Authors:
  • Hanghang Tong (Tsinghua University, Beijing, China); Jingrui He (Tsinghua University, Beijing, China); Mingjing Li (Microsoft Research Asia, Beijing, China); Changshui Zhang (Tsinghua University, Beijing, China); Wei-Ying Ma (Microsoft Research Asia, Beijing, China)

  • Venue:
  • Proceedings of the 13th annual ACM international conference on Multimedia
  • Year:
  • 2005

Abstract

To better understand the content of multimedia, considerable research effort has been devoted to learning from multi-modal features. In this paper, we study the problem from a graph point of view: the features from each modality are represented as one independent graph, and the learning task is formulated as inferring labels from the constraints in every graph together with the supervision information (if available). For semi-supervised learning, two fusion schemes are proposed, a linear form and a sequential form. Each scheme is derived from an optimization point of view and further justified from two sides: similarity propagation and a Bayesian interpretation. In doing so, we reveal the regularized-optimization nature, the transductive learning nature, and the prior-fusion nature of the proposed schemes, respectively. Moreover, the proposed method extends easily to unsupervised learning, including clustering and embedding. Systematic experimental results validate the effectiveness of the proposed method.
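The abstract gives no pseudocode, but the linear fusion scheme it mentions can be pictured as label propagation over a convex combination of per-modality graphs. Below is a minimal sketch in the style of standard graph-based transductive learning (Zhou et al.'s local-and-global-consistency propagation); the function names, the uniform default `weights`, and the `alpha` value are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def normalized_similarity(W):
    """Symmetrically normalize an affinity matrix: S = D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard against isolated nodes
    return W * np.outer(d_inv_sqrt, d_inv_sqrt)

def linear_fusion_propagation(graphs, Y, weights=None, alpha=0.9):
    """Label propagation over a linear combination of per-modality graphs.

    graphs  : list of (n, n) affinity matrices, one graph per modality
    Y       : (n, c) initial label matrix; rows of zeros mark unlabeled points
    weights : combination weights over modalities (assumed uniform by default)
    alpha   : propagation strength in (0, 1) -- an illustrative hyperparameter

    Returns the (n, c) soft label matrix
        F = (1 - alpha) * (I - alpha * S)^{-1} Y,
    where S is the fused normalized similarity matrix.
    """
    n = Y.shape[0]
    if weights is None:
        weights = np.full(len(graphs), 1.0 / len(graphs))
    # Linear fusion: one combined graph built from all modalities.
    S = sum(w * normalized_similarity(W) for w, W in zip(weights, graphs))
    # Closed-form solution of the propagation fixed point.
    return np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * Y)
```

A sequential scheme, by contrast, would apply a propagation step on each modality's graph in turn rather than fusing the graphs first; the trade-off between the two forms is what the paper's optimization and Bayesian analyses examine.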