We discuss the utility of dimensionality reduction algorithms for putting data points in high-dimensional spaces into correspondence, by learning a transformation between assigned data points on a lower-dimensional structure. We assume that similar high-dimensional feature spaces are characterized by a similar underlying low-dimensional structure. To determine an affine transformation between two data sets, we make use of well-known dimensionality reduction algorithms. We demonstrate this procedure on applications such as classification and assignment between two given data sets, and evaluate six well-known algorithms in several experiments with different objectives. We show that these algorithms, combined with our transformation approach, can relate high-dimensional data sets to each other. We also show that linear methods are more suitable for assignment tasks, whereas graph-based methods appear superior for classification tasks.
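The following is a minimal sketch of the overall procedure, not the paper's actual experimental setup: two synthetic high-dimensional data sets sharing a common low-dimensional structure are each reduced independently (plain PCA stands in here for the six evaluated algorithms, as the linear case), an affine map between the two embeddings is fitted by least squares from a few assigned point pairs, and the map is then used to assign the remaining points by nearest neighbor. All data and dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical data: two high-dimensional sets derived from the same
# 2-D latent structure via different random linear mappings.
t = rng.uniform(0, 2 * np.pi, 300)
latent = np.column_stack([np.cos(t), np.sin(t)])            # shared structure
X1 = latent @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(300, 50))
X2 = latent @ rng.normal(size=(2, 80)) + 0.01 * rng.normal(size=(300, 80))

# Step 1: reduce each data set to the same low dimension independently.
Y1 = PCA(n_components=2).fit_transform(X1)
Y2 = PCA(n_components=2).fit_transform(X2)

# Step 2: from a handful of known correspondences, fit an affine map
# Y2 ~ Y1 @ A + b by linear least squares in homogeneous coordinates.
idx = rng.choice(300, size=20, replace=False)               # assigned pairs
H = np.hstack([Y1[idx], np.ones((len(idx), 1))])
params, *_ = np.linalg.lstsq(H, Y2[idx], rcond=None)
A, b = params[:-1], params[-1]

# Step 3: map all of Y1 into Y2's embedding and assign each mapped
# point to its nearest neighbor there.
Y1_mapped = Y1 @ A + b
dists = np.linalg.norm(Y1_mapped[:, None, :] - Y2[None, :, :], axis=2)
assignment = dists.argmin(axis=1)
print("fraction of correct assignments:", np.mean(assignment == np.arange(300)))
```

Because both embeddings are (up to noise) linear images of the same latent coordinates, a single affine transformation suffices to align them here; with graph-based reduction methods the fitted map would only approximate the relation between the two embeddings.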