By means of mathematical analysis and numerical experiments, this study shows that the non-uniqueness of solutions and the data overfitting that plague the multilayer feedforward neural network for nonlinear principal component analysis (NLPCA) stem from an inappropriate network architecture. A simplified two-hidden-layer feedforward network, which omits the encoding layer and the bias terms in the bottleneck and output neurons, is proposed for NLPCA. This compact NLPCA model alleviates both problems encountered with the more complex architecture. The numerical experiments use a data set generated from a well-known nonlinear system, the Lorenz chaotic attractor. Given the same number of bottleneck neurons (i.e., the same reduced dimension), the compact NLPCA model characterizes and represents the Lorenz attractor with significantly fewer parameters than the corresponding three-hidden-layer feedforward NLPCA network.
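To make the described architecture concrete, the following is a minimal sketch, in PyTorch, of a compact two-hidden-layer NLPCA autoencoder applied to Lorenz-attractor data. The layer widths, the tanh activation, the integration settings, and the training loop are illustrative assumptions, not the paper's exact configuration; only the structural constraints (no encoding layer, no bias in the bottleneck and output neurons) follow the abstract.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

# Generate a Lorenz-attractor data set (standard parameters assumed;
# the abstract does not give the exact integration settings).
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 50.0, 5000))
data = torch.tensor(sol.y.T, dtype=torch.float32)   # shape (5000, 3)
data = (data - data.mean(0)) / data.std(0)          # standardize each variable

class CompactNLPCA(nn.Module):
    """Compact NLPCA autoencoder: input -> bottleneck -> decoding -> output.
    There is no encoding layer, and the bottleneck and output neurons
    carry no bias terms, per the abstract's description."""
    def __init__(self, n_in=3, n_bottleneck=1, n_decode=4):
        super().__init__()
        self.bottleneck = nn.Linear(n_in, n_bottleneck, bias=False)  # no bias
        self.decode = nn.Linear(n_bottleneck, n_decode)              # nonlinear hidden layer
        self.out = nn.Linear(n_decode, n_in, bias=False)             # no bias

    def forward(self, x):
        u = self.bottleneck(x)                       # reduced-dimension representation
        return self.out(torch.tanh(self.decode(u)))

model = CompactNLPCA()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)  # reconstruction error
    loss.backward()
    opt.step()
```

With these illustrative sizes (one bottleneck neuron, four decoding neurons), the model has 3 + (4 + 4) + 12 = 23 parameters; inserting an encoding layer of comparable width and restoring the bias terms, as in the three-hidden-layer architecture, would roughly double that count, which is the kind of saving the abstract refers to.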