Principal component neural networks: theory and applications
Non-linear dimensionality reduction techniques for classification and visualization
Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Laplacian Eigenmaps for dimensionality reduction and data representation
Neural Computation
Think globally, fit locally: unsupervised learning of low dimensional manifolds
The Journal of Machine Learning Research
Pattern Classification (2nd Edition)
GPCA: an efficient dimension reduction scheme for image compression and retrieval
Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Differential Structure in non-Linear Image Embedding Functions
CVPRW '04: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Volume 1
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
SIAM Journal on Scientific Computing
Unsupervised Learning of Image Manifolds by Semidefinite Programming
International Journal of Computer Vision
Towards a unified approach to document similarity search using manifold-ranking of blocks
Information Processing and Management: an International Journal
An introduction to nonlinear dimensionality reduction by maximum variance unfolding
AAAI'06: Proceedings of the 21st National Conference on Artificial Intelligence, Volume 2
Unsupervised learning of image manifolds by semidefinite programming
CVPR'04: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
A two-step framework for highly nonlinear data unfolding
Neurocomputing
Automatic configuration of spectral dimensionality reduction methods
Pattern Recognition Letters
Nonlinear dimensionality reduction is a challenging problem encountered in many areas of high-dimensional data analysis, including machine learning, pattern recognition, scientific visualization, and neural computation. Based on different geometric intuitions about manifolds, maximum variance unfolding (MVU) and Laplacian eigenmaps are designed to detect different aspects of a dataset. In this paper, combining the ideas of MVU and Laplacian eigenmaps, we propose a new nonlinear dimensionality reduction method called distinguishing variance embedding (DVE). DVE unfolds the dataset by maximizing the global variance subject to the proximity-preservation constraint originating in Laplacian eigenmaps. We illustrate the algorithm on easily visualized examples of curves and surfaces, as well as on real images of rotating objects, faces, and handwritten digits.
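The abstract's recipe (maximize global variance while preserving neighborhood proximity via a graph Laplacian) can be sketched as a generalized eigenproblem. The following is a minimal, hedged sketch, not the authors' implementation: it assumes an unweighted symmetrized k-nearest-neighbor graph, expresses the global variance through the centering matrix H = I - (1/n)11^T, and maximizes y^T H y relative to the Laplacian penalty y^T L y; the function name `dve_embed` and the regularization parameter are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

def dve_embed(X, n_components=2, k=8, reg=1e-9):
    """Sketch of a DVE-style embedding (assumed formulation).

    Maximizes global variance y^T H y subject to the Laplacian
    proximity penalty y^T L y, via the generalized eigenproblem
    H v = lambda L v, keeping the largest eigenvalues.
    """
    n = X.shape[0]
    # symmetrized k-nearest-neighbor graph from pairwise distances
    D = squareform(pdist(X))
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(D[i])[1:k + 1]  # skip the point itself
        W[i, neighbors] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix (global variance)
    # small regularization keeps L positive definite for the solver
    vals, vecs = eigh(H, L + reg * np.eye(n))
    # largest generalized eigenvalues give the variance-maximizing directions
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]
```

Solving for the largest generalized eigenvalues unfolds the data (variance is pushed up) while directions that cut across graph edges are penalized, which is the qualitative behavior the abstract attributes to DVE.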