Understanding multimedia document semantics for cross-media retrieval
PCM'05 Proceedings of the 6th Pacific-Rim conference on Advances in Multimedia Information Processing - Volume Part I
Media objects of different modalities often co-occur and naturally complement each other, both in semantics and in modality. In this paper, we propose a manifold-learning-based cross-media retrieval approach that addresses two basic but crucial questions of media-object semantics understanding and cross-media retrieval. First, given the semantic complementarity, how can we represent co-occurring media objects and fuse the complementary information they carry to understand their integrated semantics precisely? Second, given the modality complementarity, how can we bridge modalities to establish a cross-index and facilitate cross-media retrieval? To solve these two problems, we first construct a Multimedia Document (MMD) Semi-Semantic Graph (MMDSSG) and then apply Multidimensional Scaling to create an MMD Semantic Space (MMDSS). Both long-term and short-term relevance feedback are proposed to boost system performance: the former refines the MMDSSG, and the latter introduces new items that are not in the training set into the MMDSS. Since all MMDs and their component media objects of different modalities lie in the MMDSS and are indexed uniformly by their coordinates regardless of modality, the semantic subspace acts as a bridge between media objects of different modalities, and cross-media retrieval is easily achieved. Experimental results are encouraging and indicate that the proposed approach is effective.
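The core pipeline the abstract describes (graph distances over MMDs, then Multidimensional Scaling into a shared coordinate space, then nearest-neighbour retrieval by those coordinates) can be sketched minimally in NumPy. This is an illustrative toy only: the four-node dissimilarity graph, the function names, and the use of all-pairs shortest paths plus classical MDS are assumptions standing in for the paper's actual MMDSSG construction and feedback mechanisms, which are not modeled here.

```python
import numpy as np

def all_pairs_shortest_paths(w):
    """Floyd-Warshall over a weighted graph; a toy stand-in for
    semi-semantic distances between MMDs."""
    d = w.copy()
    for k in range(d.shape[0]):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def classical_mds(d, k=2):
    """Embed a distance matrix into k-dim coordinates (the shared
    'semantic space')."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:k]             # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

# Hypothetical dissimilarity graph over 4 MMDs (np.inf = no direct edge).
inf = np.inf
w = np.array([[0.0, 1.0, inf, 4.0],
              [1.0, 0.0, 2.0, inf],
              [inf, 2.0, 0.0, 1.0],
              [4.0, inf, 1.0, 0.0]])

geodesic = all_pairs_shortest_paths(w)
coords = classical_mds(geodesic, k=2)

# Cross-media retrieval reduces to nearest-neighbour search on coordinates,
# regardless of the modality behind each point.
query = coords[0]
ranking = np.argsort(np.linalg.norm(coords - query, axis=1))
```

In this toy graph the geodesic distances happen to be realizable on a line, so the MDS embedding reproduces them almost exactly; in general the top-k eigenvectors give the best rank-k approximation in the classical-MDS sense.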