The learning dynamics of on-line independent component analysis is analysed in the limit of large data dimension. We study a simple Hebbian learning algorithm that can be used to separate a small number of non-Gaussian components from a high-dimensional data set. The de-mixing matrix parameters are confined to a Stiefel manifold of tall, orthogonal matrices, and we introduce a natural gradient variant of the algorithm appropriate to learning on this manifold. For large input dimension the parameter trajectory of both algorithms passes through a sequence of unstable fixed points, each described by a diffusion process in a polynomial potential. Choosing too large a learning rate increases the escape time from each of these fixed points, effectively trapping the learning in a sub-optimal state. To avoid these trapping states a very low learning rate must be chosen during the learning transient, resulting in learning time-scales of O(N²) or O(N³) iterations, where N is the data dimension. Escape from each sub-optimal state produces a sequence of symmetry-breaking events as the algorithm learns each source in turn. This is in marked contrast to the learning dynamics of related on-line learning algorithms for multilayer neural networks and principal component analysis. Although the natural gradient variant of the algorithm has good asymptotic convergence properties, its transient dynamics is equivalent to that of the standard Hebbian algorithm.
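The setting described above can be sketched in code: an on-line Hebbian-like update of a tall de-mixing matrix, retracted onto the Stiefel manifold after each step. This is a minimal illustrative sketch, not the paper's exact algorithm; the choice of nonlinearity `phi`, the cubic form `y**3`, and the QR retraction are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100    # data dimension (the paper's large-N limit)
M = 2      # number of non-Gaussian sources to extract, M << N
eta = 1e-3 # learning rate; the analysis suggests this must be kept very small
           # during the transient to escape sub-optimal fixed points

def phi(y):
    # illustrative odd nonlinearity (assumption, not the paper's choice)
    return y ** 3

# random start on the Stiefel manifold of N x M orthogonal matrices
W, _ = np.linalg.qr(rng.standard_normal((N, M)))

for _ in range(1000):
    x = rng.standard_normal(N)          # whitened input (toy: Gaussian noise)
    y = W.T @ x                         # projected outputs, shape (M,)
    W = W + eta * np.outer(x, phi(y))   # Hebbian-like on-line update
    W, _ = np.linalg.qr(W)              # retract back onto the Stiefel manifold

# after each retraction the columns of W remain orthonormal: W.T @ W = I_M
```

The QR retraction here stands in for the natural gradient variant's intrinsic manifold update; both keep `W.T @ W` equal to the M×M identity throughout learning.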