This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis (GMA). GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it generalizes to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue-based solution, and it is applicable to any domain. GMA exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of quadratically constrained quadratic program (QCQP), which can be solved efficiently as a generalized eigenvalue problem. GMA solves a joint, relaxed QCQP over the different feature spaces to obtain a single (non)linear subspace. Intuitively, GMA is a supervised extension of Canonical Correlation Analysis (CCA) and is therefore well suited to cross-view classification and retrieval. The proposed approach is general and has the potential to replace CCA whenever classification or retrieval is the goal and label information is available. We outperform previous approaches for text-image retrieval on the Pascal and Wiki text-image datasets, and we report state-of-the-art results for pose- and lighting-invariant face recognition on the MultiPIE face dataset, significantly outperforming other approaches.
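To make the generalized-eigenvalue formulation concrete, the following is a minimal sketch of the unsupervised special case the abstract builds on: plain two-view CCA, posed as a block generalized eigenvalue problem and solved with `scipy.linalg.eigh`. The synthetic data, regularization constant, and variable names are illustrative assumptions, not the paper's GMA implementation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, dx, dy = 500, 5, 4

# Two views observing a shared latent signal z (toy data, not a real benchmark)
z = rng.normal(size=(n, 1))
X = z @ rng.normal(size=(1, dx)) + 0.1 * rng.normal(size=(n, dx))
Y = z @ rng.normal(size=(1, dy)) + 0.1 * rng.normal(size=(n, dy))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# Regularized covariance blocks (ridge term keeps B positive definite)
Cxx = X.T @ X / n + 1e-6 * np.eye(dx)
Cyy = Y.T @ Y / n + 1e-6 * np.eye(dy)
Cxy = X.T @ Y / n

# CCA as A w = lambda B w over the stacked projection vector w = [wx; wy]
A = np.block([[np.zeros((dx, dx)), Cxy],
              [Cxy.T, np.zeros((dy, dy))]])
B = np.block([[Cxx, np.zeros((dx, dy))],
              [np.zeros((dy, dx)), Cyy]])

evals, evecs = eigh(A, B)        # eigenvalues in ascending order
w = evecs[:, -1]                 # top canonical pair
wx, wy = w[:dx], w[dx:]

# Projections of the two views should be highly correlated
corr = np.corrcoef(X @ wx, Y @ wy)[0, 1]
```

With a strong shared latent signal, `corr` comes out close to 1. GMA's contribution is to extend this same eigenvalue machinery with supervised (label-aware) objectives in each view, and to kernelize it.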