Blind source separation is the problem of extracting independent signals from their mixtures without knowing either the mixing coefficients or the probability distributions of the source signals; it may be applied, for example, to EEG and MEG imaging of the brain. It is already known that certain algorithms work well for extracting independent components. The present paper is concerned with the superefficiency of such algorithms, based on statistical and dynamical analysis. In a batch statistical estimation using t examples, the covariance of any two extracted independent signals converges to 0 at the order of 1/t. On-line dynamics shows that the covariance is of the order of η when the learning rate η is fixed at a small constant. In contrast with these general properties, a surprising superefficiency holds in blind source separation under certain conditions, where superefficiency means that the covariance decreases at the order of 1/t² or of η². The paper uses the natural gradient learning algorithm and the method of estimating functions to obtain superefficient procedures for both batch estimation and on-line learning, introducing a standardized estimating function to this end. Superefficiency does not imply that the error variances of the extracted signals decrease at the order of 1/t² or η², but it does imply that their covariances (and hence their mutual independence) do.
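The on-line natural gradient learning rule mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact superefficient procedure: it uses the standard update W ← W + η (I − φ(y)yᵀ) W with the hypothetical choice φ(y) = tanh(y), which is suitable for super-Gaussian sources; the two Laplacian sources and the 2×2 mixing matrix are invented demo data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian (Laplacian) sources -- hypothetical demo data.
n = 20000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # unknown mixing matrix
X = A @ S                           # observed mixtures (strongly correlated)

# On-line natural gradient rule: W <- W + eta * (I - phi(y) y^T) W,
# with the nonlinearity phi(y) = tanh(y) (an assumption for this sketch).
W = np.eye(2)
eta = 0.01                          # small fixed learning rate
for t in range(n):
    y = W @ X[:, t]
    W += eta * (np.eye(2) - np.outer(np.tanh(y), y)) @ W

Y = W @ X                           # recovered signals
cov = np.cov(Y)
corr = abs(cov[0, 1]) / np.sqrt(cov[0, 0] * cov[1, 1])
print(corr)                         # small residual correlation after learning
```

With a fixed small η the residual covariance of the extracted signals stays of the order of η, as the abstract states; halving η (at the cost of slower convergence) correspondingly shrinks it, and the paper's superefficient variants improve this to order η².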