Natural Gradient Learning for Over-and Under-Complete Bases in ICA
Neural Computation
The present work addresses the ICA problem in the overdetermined (or undercomplete) case, in which the sensors outnumber the sources. As shown in the recent literature, the natural gradient (NG) approach is an efficient way to obtain fast convergence in ICA learning rules, including in this case of interest. However, to the author's knowledge, the stability properties of such algorithms have been analyzed only for complete ICA. To fill this gap, a general framework for the stability analysis of overdetermined NG-based ICA learning rules is proposed here. In particular, it is observed that the existing algorithms do not admit separating matrices as equilibrium points, whereas an alternative standard-gradient ICA learning rule proposed here, together with all the NG versions that can be derived from it, does. This property, combined with local stability at the equilibrium for the new algorithms, makes them preferable to the others in practical applications.
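As a point of reference for the setting discussed above, the following is a minimal NumPy sketch of a batch natural-gradient ICA update applied with a rectangular separating matrix in the overdetermined case (more sensors than sources). It uses the generic Amari-style rule W ← W + μ(I − f(y)yᵀ)W with a tanh nonlinearity for super-Gaussian sources; this is the standard complete-case rule run with a non-square W, not the specific algorithm proposed in the paper, and all dimensions, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 super-Gaussian (Laplacian) sources, 4 sensors,
# so the mixing matrix A is tall and the problem is overdetermined.
n_src, n_sens, T = 2, 4, 20000
S = rng.laplace(size=(n_src, T))            # source signals
A = rng.standard_normal((n_sens, n_src))    # tall mixing matrix (4 x 2)
X = A @ S                                   # sensor observations (4 x T)

# Batch natural-gradient update with a rectangular separating matrix W (2 x 4):
#   W <- W + mu * (I - f(Y) Y^T / T) @ W,   f = tanh (super-Gaussian sources).
# Note (I - f(Y) Y^T / T) is n_src x n_src, so the update is well defined
# even though W is not square.
W = 0.1 * rng.standard_normal((n_src, n_sens))
mu = 0.05
for _ in range(500):
    Y = W @ X                               # current source estimates (2 x T)
    W += mu * (np.eye(n_src) - np.tanh(Y) @ Y.T / T) @ W

# If W has converged to a separating point, the global transfer matrix
# G = W @ A is close to a scaled permutation: one dominant entry per row.
G = W @ A
```

In this toy run the dominance of one entry per row of G can be checked numerically; the paper's point is precisely that whether such separating points are equilibria, and whether they are locally stable, depends on which overdetermined NG rule is used.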