Linear principal component analysis (PCA) can be extended to nonlinear PCA by using artificial neural networks. However, the benefit of curved components comes at the price of carefully controlling model complexity. Moreover, standard model-selection techniques, including cross-validation and, more generally, the use of an independent test set, fail when applied to nonlinear PCA because of its inherently unsupervised character. This paper presents a new approach to validating the complexity of nonlinear PCA models: the error in missing-data estimation is used as the model-selection criterion. It is motivated by the idea that only the model of optimal complexity can predict missing values with the highest accuracy. While standard test-set validation usually favours over-fitted nonlinear PCA models, the proposed validation approach correctly selects the optimal model complexity.
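The validation idea can be sketched in a small, self-contained example. The paper concerns nonlinear PCA; the sketch below substitutes a linear PCA fitted by simple iterative imputation so that it stays short and runnable, but the selection loop is the same: artificially remove some entries, fit models of increasing complexity (here, rank), and pick the complexity that best predicts the removed values. All data, sizes, and the imputation routine are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples near a rank-2 linear subspace in 6-D,
# plus small observation noise.
n, d, true_rank = 200, 6, 2
X = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, d))
X += 0.05 * rng.normal(size=(n, d))

# Artificially remove 10% of the entries; these held-out values play the
# role of the "missing data" used as the validation criterion.
mask = rng.random(X.shape) < 0.10
X_obs = X.copy()
X_obs[mask] = np.nan

def fit_pca_missing(X_obs, k, n_iter=50):
    """Rank-k PCA with missing values via simple iterative imputation."""
    Xf = X_obs.copy()
    miss = np.isnan(Xf)
    col_means = np.nanmean(X_obs, axis=0)
    Xf[miss] = np.take(col_means, np.where(miss)[1])  # init with column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = mu + (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
        Xf[miss] = recon[miss]                        # re-impute missing entries
    return recon

# Model selection: prefer the complexity whose reconstruction best predicts
# the artificially removed values.
errors = {}
for k in range(1, d):
    recon = fit_pca_missing(X_obs, k)
    errors[k] = np.mean((recon[mask] - X[mask]) ** 2)

best_k = min(errors, key=errors.get)
print("selected rank:", best_k)
```

An over-complex model fits the observed entries, including their noise, ever more closely, but that extra flexibility does not carry over to the held-out missing entries, so the missing-data error stops improving at the true complexity. This is exactly why the criterion avoids the over-fitting that plain test-set reconstruction error rewards.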