Measuring multi-modality similarities via subspace learning for cross-media retrieval

  • Authors:
  • Hong Zhang; Jianguang Weng

  • Affiliations:
  • The Institute of Artificial Intelligence, Zhejiang University, Hangzhou, P.R. China (both authors)

  • Venue:
  • PCM'06: Proceedings of the 7th Pacific Rim Conference on Advances in Multimedia Information Processing
  • Year:
  • 2006

Abstract

Cross-media retrieval is an interesting research problem that seeks to break through the limitation of modality, so that users can query multimedia objects with examples of a different modality. To enable cross-media retrieval, the similarity between media objects with heterogeneous low-level features must be measured. This paper proposes a novel approach that learns both intra- and inter-media correlations among multi-modality feature spaces and constructs an MLE semantic subspace containing multimedia objects of different modalities. In addition, relevance feedback strategies are developed to improve the efficiency of cross-media retrieval from both short- and long-term perspectives. Experiments show encouraging results and effective retrieval performance.
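
The abstract does not spell out the MLE construction, but the general recipe it describes, embedding heterogeneous modalities into one shared subspace where distances become directly comparable, can be illustrated with a Laplacian-eigenmaps-style sketch. Everything below (the RBF intra-media affinities, the co-occurrence pairing scheme, the modality names, and all function names) is an assumption made for illustration, not the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_affinity(X):
    """Intra-media affinity: RBF kernel over low-level feature distances."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(sq[sq > 0])        # median heuristic for the bandwidth
    W = np.exp(-sq / sigma2)
    np.fill_diagonal(W, 0.0)
    return W

def joint_affinity(X_img, X_aud, pairs, cross_weight=1.0):
    """Block affinity over two modalities; `pairs` lists (img_idx, aud_idx)
    co-occurrence links that carry the inter-media correlation."""
    n_i = len(X_img)
    n = n_i + len(X_aud)
    W = np.zeros((n, n))
    W[:n_i, :n_i] = rbf_affinity(X_img)
    W[n_i:, n_i:] = rbf_affinity(X_aud)
    for i, j in pairs:                    # link co-occurring objects
        W[i, n_i + j] = W[n_i + j, i] = cross_weight
    return W

def laplacian_embedding(W, dim=2):
    """Solve the generalized problem L y = lambda D y and keep the
    smallest nontrivial eigenvectors as subspace coordinates."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = eigh(L, D)
    return vecs[:, 1:dim + 1]             # drop the trivial constant vector

rng = np.random.default_rng(0)
X_img = rng.normal(size=(8, 30))          # e.g. 30-d visual features
X_aud = rng.normal(size=(8, 12))          # e.g. 12-d auditory features
pairs = [(k, k) for k in range(8)]        # assumed known co-occurring pairs

Y = laplacian_embedding(joint_affinity(X_img, X_aud, pairs), dim=2)
Y_img, Y_aud = Y[:8], Y[8:]

# Cross-media query: rank audio objects by distance to an image example
# inside the shared subspace, where heterogeneous features are comparable.
dists = np.linalg.norm(Y_aud - Y_img[0], axis=1)
print("audio ranking for image query 0:", np.argsort(dists))
```

Under this sketch, short-term relevance feedback could reweight the cross-modal links and long-term feedback could accumulate new co-occurrence pairs, though the paper's actual feedback strategies are not specified in the abstract.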