We investigate the asymptotic behavior of a general class of on-line Principal Component Analysis (PCA) learning algorithms, focusing on two recently proposed algorithms based on strictly local learning rules. We rigorously establish that the behavior of each algorithm is intimately related to an ordinary differential equation (ODE) obtained by suitably averaging over the training patterns, and we study the equilibria of these ODEs and their local stability properties. In particular, our results imply that local PCA algorithms should incorporate hierarchical rather than the more competitive, symmetric decorrelation, because the hierarchical variants exhibit superior convergence behavior.
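To illustrate the distinction the abstract draws, here is a minimal sketch of the two standard local decorrelation schemes for on-line PCA: hierarchical decorrelation as in Sanger's Generalized Hebbian Algorithm (GHA), and symmetric decorrelation as in Oja's subspace rule. These are the classical representatives of the two families; the paper's specific local rules may differ in detail. The data, dimensions, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training patterns with a clear principal structure
# (diagonal covariance chosen for clarity; eigenvectors are the axes).
C = np.diag([5.0, 2.0, 0.5])
X = rng.multivariate_normal(np.zeros(3), C, size=5000)

def gha_step(W, x, eta):
    """Hierarchical decorrelation (Sanger's GHA): unit j is decorrelated
    only against units i <= j via the lower-triangular mask, so unit j
    converges to the j-th individual principal component."""
    y = W @ x
    return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

def oja_subspace_step(W, x, eta):
    """Symmetric decorrelation (Oja's subspace rule): all units are treated
    identically, so W converges only to some rotated basis of the principal
    subspace, not to the individual principal components."""
    y = W @ x
    return W + eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Extract 2 components from R^3 with the hierarchical rule.
W = 0.1 * rng.standard_normal((2, 3))
for x in X:
    W = gha_step(W, x, eta=0.001)

# Row 0 should align (up to sign) with the top eigenvector (1,0,0),
# row 1 with the second eigenvector (0,1,0).
print(np.round(np.abs(W), 2))
```

Running the symmetric rule instead yields rows spanning the same two-dimensional subspace but mixed by an arbitrary rotation, which is one practical sense in which the hierarchical scheme is preferable when individual eigenvectors are wanted.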