Think globally, fit locally: unsupervised learning of low dimensional manifolds. The Journal of Machine Learning Research.
Manifold-ranking based image retrieval. Proceedings of the 12th annual ACM international conference on Multimedia.
Graph based multi-modality learning. Proceedings of the 13th annual ACM international conference on Multimedia.
Unsupervised learning from a corpus for shape-based 3D model retrieval. MIR '06: Proceedings of the 8th ACM international workshop on Multimedia information retrieval.
3D model search and retrieval using the spherical trace transform. EURASIP Journal on Applied Signal Processing.
Ranking with local regression and global alignment for cross media retrieval. MM '09: Proceedings of the 17th ACM international conference on Multimedia.
A 3D Shape Retrieval Framework Supporting Multimodal Queries. International Journal of Computer Vision.
Measuring multi-modality similarities via subspace learning for cross-media retrieval. PCM '06: Proceedings of the 7th Pacific Rim conference on Advances in Multimedia Information Processing.
SHREC'10 track: generic 3D warehouse. EG 3DOR '10: Proceedings of the 3rd Eurographics conference on 3D Object Retrieval.
I-SEARCH: a unified framework for multimodal search and retrieval. The Future Internet.
In this paper, a novel approach to multimodal search and retrieval is introduced. The searchable items are media representations comprising multiple modalities, such as 2D images and 3D objects, that share a common semantic concept. The proposed method combines the low-level feature distances of each separate modality to construct a new low-dimensional feature space in which all media objects are mapped, irrespective of their constituent modalities. Whereas most existing state-of-the-art approaches support queries of only a single modality at a time, the proposed method allows querying with multiple modalities simultaneously through efficient multimodal query formulation.
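The core idea of fusing per-modality distances into a common low-dimensional space can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes equal-weight fusion of normalized per-modality distance matrices followed by a classical multidimensional scaling (MDS) embedding, with the function name `fuse_and_embed` and the toy data being hypothetical.

```python
import numpy as np

def fuse_and_embed(distance_mats, dim=2):
    """Fuse per-modality distance matrices (equal weights, an assumption)
    and embed all items into a shared low-dimensional space via
    classical multidimensional scaling (MDS)."""
    # Equal-weight fusion of max-normalized per-modality distances.
    fused = sum(D / D.max() for D in distance_mats) / len(distance_mats)
    n = fused.shape[0]
    # Double-center the squared distances (classical MDS / Torgerson scaling).
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (fused ** 2) @ J
    # Top eigenvectors of B give the low-dimensional coordinates.
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy example: two "modalities" described by random pairwise distances
# over the same 4 media objects.
rng = np.random.default_rng(0)

def random_dist(n):
    X = rng.random((n, 3))
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)

coords = fuse_and_embed([random_dist(4), random_dist(4)], dim=2)
print(coords.shape)  # every object gets one point in the common space
```

In such a common space, a multimodal query (e.g. an image plus a 3D object) could be answered by combining the query's per-modality distances before embedding, so that a single nearest-neighbor search covers all modalities at once.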