The convergence of a class of Hyvärinen-Oja independent component analysis (ICA) learning algorithms with constant learning rates is investigated by analyzing both the original stochastic discrete-time (SDT) algorithms and the corresponding deterministic discrete-time (DDT) algorithms. Most existing learning rates for ICA learning algorithms are required to approach zero as the learning step increases; however, this requirement is impractical in many applications, and constant learning rates overcome that shortcoming. On the other hand, the original algorithms, described in SDT form, are studied directly. Invariant sets of these algorithms are obtained so that nondivergence is guaranteed in a stochastic environment. Within the invariant sets, the local convergence of the original algorithms is analyzed indirectly by studying the convergence of the corresponding DDT algorithms. It is rigorously proven that trajectories of the DDT algorithms starting from the invariant sets converge to an independent component direction with positive or negative kurtosis. These convergence results shed light on the dynamical behavior of the original SDT algorithms. Furthermore, the corresponding DDT algorithms are extended to block versions of the original SDT algorithms; the block algorithms not only establish a relationship between the SDT and DDT algorithms, but also achieve good convergence speed and accuracy in practice. Simulation examples are carried out to illustrate the derived theory.
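As a rough illustration of the block (DDT-style) scheme the abstract describes, the sketch below applies a Hyvärinen-Oja-type kurtosis update with a constant learning rate to whitened synthetic mixtures. The sources, mixing matrix, learning rate, iteration count, and the per-step renormalization are all illustrative assumptions made for this demo, not the paper's exact formulation or its invariant-set construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demo (assumed setup): one super-Gaussian source
# (Laplacian, positive kurtosis) and one sub-Gaussian source
# (uniform, negative kurtosis), linearly mixed.
n = 20000
s = np.vstack([rng.laplace(size=n),           # excess kurtosis > 0
               rng.uniform(-1, 1, size=n)])   # excess kurtosis < 0
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # hypothetical mixing matrix
x = A @ s

# Whitening, so that E[z z^T] ~ I.
d, E = np.linalg.eigh(np.cov(x))
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x

def block_ica(z, eta=0.2, iters=200, sign=+1):
    """Block (batch) Hyvarinen-Oja-style kurtosis update with a
    constant learning rate eta. sign=+1 seeks a positive-kurtosis
    direction, sign=-1 a negative-kurtosis one. The unit-norm
    projection each step is a stabilizing simplification."""
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = w @ z                              # current output
        w = w + sign * eta * (z * y ** 3).mean(axis=1)
        w /= np.linalg.norm(w)                 # keep w on the unit sphere
    return w

w = block_ica(z, sign=+1)
y = w @ z
# y should align (up to sign and scale) with the Laplacian source.
corr = abs(np.corrcoef(y, s[0])[0, 1])
```

With positive `sign`, the iteration behaves like a fixed-point ascent on the fourth moment of the whitened output, so the recovered direction lines up with the super-Gaussian source; flipping `sign` would instead extract the sub-Gaussian one.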