Multimodal search and retrieval using manifold learning and query formulation

  • Authors:
  • Apostolos Axenopoulos; Stavroula Manolopoulou; Petros Daras

  • Affiliation (all authors):
  • Informatics and Telematics Institute, Hellas

  • Venue:
  • Proceedings of the 16th International Conference on 3D Web Technology
  • Year:
  • 2011


Abstract

In this paper, a novel approach for multimodal search and retrieval is introduced. The searchable items are media representations consisting of multiple modalities, such as 2D images and 3D objects, which share a common semantic concept. The proposed method combines the low-level feature distances of each separate modality to construct a new low-dimensional feature space, where all media objects are mapped irrespective of their constituent modalities. While most existing state-of-the-art approaches support queries of only a single modality at a time, the proposed method allows querying with multiple modalities simultaneously, through efficient multimodal query formulation.
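The abstract's pipeline (combine per-modality distances, embed all objects into one low-dimensional space, then query with several modalities at once) can be sketched as follows. This is only an illustrative stand-in, not the authors' algorithm: it uses classical multidimensional scaling as the manifold-learning step, a simple weighted sum as the distance-combination rule, and centroid averaging as the multimodal query formulation; all data, weights, and function names are hypothetical.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed n items into a k-dim space from an n x n distance matrix D
    via classical MDS (a simple stand-in for the manifold-learning step)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # top-k eigenpairs
    scale = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * scale            # n x k coordinates

# Toy distances from two modalities (e.g. 2D-image and 3D-shape
# descriptors) over the same 5 media objects; data is synthetic.
rng = np.random.default_rng(0)
X = rng.random((5, 4))                       # image descriptors
D_img = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = rng.random((5, 3))                       # 3D-shape descriptors
D_3d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)

# Combine per-modality distances (here: equal weights) and embed into
# one common low-dimensional space, irrespective of modality.
D = 0.5 * D_img + 0.5 * D_3d
emb = classical_mds(D, k=2)

# Multimodal query: average the embeddings of the query's parts
# (objects 0 and 1 here), then rank all objects by distance to it.
q = emb[[0, 1]].mean(axis=0)
ranking = np.argsort(np.linalg.norm(emb - q, axis=1))
print(ranking)
```

In this sketch the query lives in the same low-dimensional space as the indexed objects, which is what makes mixing modalities in a single query straightforward; the paper's actual distance-fusion and query-formulation schemes would replace the naive averaging used here.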