Statistical dynamics of on-line independent component analysis
The Journal of Machine Learning Research
Previous analytical studies of on-line independent component analysis (ICA) learning rules have focused on asymptotic stability and efficiency. In practice, the transient stages of learning are often more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm, which can find a small number of nongaussian components given data composed of a linear mixture of independent source signals. An idealized data model is considered in which the sources comprise a number of nongaussian and gaussian sources, and a solution to the dynamics is obtained in the limit where the number of gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilize otherwise unstable suboptimal fixed points. Conditions required to destabilize one such fixed point are obtained for the case of a single nongaussian component, indicating that the initial learning rate η required to escape successfully is very low (η = O(N⁻²), where N is the data dimension), resulting in very slow learning, typically requiring O(N³) iterations. Simulations confirm that this picture holds for a finite system.