This paper discusses what kind of learning model is suitable for feature extraction in data representation, and suggests two evaluation criteria for nonlinear feature extractors: reconstruction error minimization and similarity preservation. Based on these criteria, a new type of principal curve, the similarity preserving principal curve (SPPC), is proposed. SPPCs minimize the reconstruction error under the condition that the similarity between similar samples is preserved in the extracted features, giving researchers effective and reliable insight into the inner structure of data sets. The existence and properties of SPPCs are analyzed, a practical learning algorithm is proposed, and high-dimensional extensions of SPPCs are also discussed. Experimental results demonstrate the strengths of SPPCs in preserving the inner structure of data sets and in discovering manifolds with high nonlinearity.
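The paper's SPPC algorithm is not reproduced here; as a minimal sketch of the two evaluation criteria themselves, the code below uses a linear projection (the first principal component) as a stand-in for a principal curve on toy data. It measures reconstruction error as the mean squared residual of the points from their projections, and similarity preservation as the correlation between pairwise distances in the input space and in the extracted one-dimensional feature space. All variable names and the specific distance-correlation measure are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: samples along a noisy straight line in the plane
t = rng.uniform(-1.0, 1.0, size=200)
X = np.c_[t, 0.5 * t] + rng.normal(scale=0.05, size=(200, 2))

# Linear stand-in for a principal curve: first principal component
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
d = Vt[0]              # curve direction (unit vector)
f = Xc @ d             # 1-D extracted feature (projection index)
proj = np.outer(f, d)  # reconstructed points on the "curve"

# Criterion 1: reconstruction error (mean squared residual)
recon_err = np.mean(np.sum((Xc - proj) ** 2, axis=1))

# Criterion 2: similarity preservation, measured here as the
# correlation between pairwise distances in input space and
# pairwise distances in the feature space
D_in = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=-1)
D_f = np.abs(f[:, None] - f[None, :])
iu = np.triu_indices(len(X), k=1)
corr = np.corrcoef(D_in[iu], D_f[iu])[0, 1]
```

For data that truly lies near a one-dimensional structure, a good feature extractor should score well on both criteria simultaneously: low `recon_err` and `corr` close to 1. SPPCs, as described in the abstract, optimize the first criterion subject to the second.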