In this paper, we present a novel semi-supervised dimensionality reduction technique that addresses the inefficient learning and costly computation incurred when coping with high-dimensional data. Our method, named dual subspace projections (DSP), embeds high-dimensional data in an optimal low-dimensional space that is learned from a few user-supplied constraints together with the structure of the input data. The method projects the data into two different subspaces, namely the kernel space and the original input space. Each projection is designed to enforce one type of constraint, and the projections in the two subspaces interact with each other to satisfy the constraints maximally while preserving the intrinsic data structure. Compared to existing techniques, our method has the following advantages: (1) it benefits from constraints even when only a few are available; (2) it is robust and free from overfitting; and (3) it handles nonlinearly separable data while learning a linear data transformation. Consequently, the method generalizes easily to new data points and handles large datasets efficiently. An empirical study on real data validates our claims: significant improvements in learning accuracy are obtained after DSP-based dimensionality reduction is applied to high-dimensional data.
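The abstract does not spell out the DSP objective, but the constraint-guided projection it describes can be illustrated in simplified form. Below is a minimal NumPy sketch of a single constraint-preserving linear projection, not the authors' DSP algorithm: it learns a projection from must-link and cannot-link pairs by solving a generalized eigenproblem. The function name `constraint_projection`, the pair format, and the regularizer `eps` are illustrative assumptions.

```python
import numpy as np

def constraint_projection(X, must_link, cannot_link, d):
    """Learn a linear projection W (D x d) that pulls must-link pairs
    together and pushes cannot-link pairs apart.

    X           : (n, D) data matrix
    must_link   : list of (i, j) index pairs that should stay close
    cannot_link : list of (i, j) index pairs that should stay apart
    d           : target dimensionality
    """
    D = X.shape[1]
    S_m = np.zeros((D, D))  # must-link scatter matrix
    S_c = np.zeros((D, D))  # cannot-link scatter matrix
    for i, j in must_link:
        diff = (X[i] - X[j])[:, None]
        S_m += diff @ diff.T
    for i, j in cannot_link:
        diff = (X[i] - X[j])[:, None]
        S_c += diff @ diff.T
    # Maximize w^T S_c w / w^T S_m w: solve the generalized eigenproblem
    # S_c w = lambda (S_m + eps I) w, keeping the top-d eigenvectors.
    # eps regularizes S_m, which is singular when constraints are few.
    eps = 1e-6
    evals, evecs = np.linalg.eig(
        np.linalg.solve(S_m + eps * np.eye(D), S_c))
    order = np.argsort(-evals.real)[:d]
    return evecs[:, order].real

# Usage: the embedding is Z = X @ W; because the learned transformation
# is linear, it applies directly to unseen points, mirroring the
# out-of-sample property the abstract claims for DSP.
```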