Independent component analysis, a new concept?
Signal Processing, Special Issue on Higher Order Statistics
A neural net for blind separation of nonstationary signals
Neural Networks
Natural gradient works efficiently in learning
Neural Computation
Flexible Independent Component Analysis
Journal of VLSI Signal Processing Systems
Nonholonomic Orthogonal Learning Algorithms for Blind Source Separation
Neural Computation
Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), Volume 2
A blind source separation technique using second-order statistics
IEEE Transactions on Signal Processing
Equivariant adaptive source separation
IEEE Transactions on Signal Processing
Second Order Nonstationary Source Separation
Journal of VLSI Signal Processing Systems
Adaptive Differential Decorrelation: A Natural Gradient Algorithm
Proceedings of the International Conference on Artificial Neural Networks (ICANN '02)
Online SOS-based multichannel blind equalization algorithm with noise
Signal Processing
Sequential Blind Signal Extraction with the Linear Predictor
Proceedings of the 4th International Symposium on Neural Networks (ISNN '07): Advances in Neural Networks, Part III
Blind separation of piecewise stationary non-Gaussian sources
Signal Processing
Blind source separation based on cumulants with time and frequency nonstationarity properties
IEEE Transactions on Audio, Speech, and Language Processing
Maximum likelihood blind image separation using nonsymmetrical half-plane Markov random fields
IEEE Transactions on Image Processing
Most source separation methods focus on stationary sources, where higher-order statistics are necessary for successful separation unless the sources are temporally correlated. For nonstationary sources, however, it was shown [Neural Networks 8 (1995) 411] that source separation can be achieved by second-order decorrelation. In this paper, we consider the cost function proposed by Matsuoka et al. [Neural Networks 8 (1995) 411] and derive natural gradient learning algorithms for both a fully connected recurrent network and a feedforward network. Because our algorithms employ the natural gradient method, they possess the equivariant property and follow a steepest-descent direction, unlike the algorithm of [Neural Networks 8 (1995) 411]. We also show that our algorithms are always locally stable, regardless of the probability distributions of the nonstationary sources.
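For illustration, below is a minimal sketch of an online natural-gradient second-order decorrelation rule of the kind the abstract describes, assuming a feedforward demixing model y(t) = W x(t) and an update of the form W <- W + eta * (I - Lambda^{-1} y y^T) W, where Lambda holds running estimates of E[y_i^2] that track the sources' nonstationary power (this form follows from Matsuoka's nonstationarity cost, but the exact recurrent and feedforward updates derived in the paper may differ). All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def natural_gradient_decorrelation(x, eta=0.01, alpha=0.05, eps=1e-6):
    """Illustrative online natural-gradient second-order decorrelation.

    x : array of shape (n_sources, n_samples), observed linear mixtures.
    Returns the demixing matrix W and the recovered signals y = W x.
    """
    n, T = x.shape
    W = np.eye(n)             # demixing matrix
    lam = np.ones(n)          # running estimates of E[y_i^2]
    y = np.zeros_like(x)
    for t in range(T):
        yt = W @ x[:, t]
        lam = (1 - alpha) * lam + alpha * yt**2          # track nonstationary power
        G = np.eye(n) - np.outer(yt / (lam + eps), yt)   # natural-gradient direction
        W = W + eta * G @ W                              # equivariant (multiplicative) update
        y[:, t] = yt
    return W, y

if __name__ == "__main__":
    # Toy demo: two nonstationary (amplitude-modulated Gaussian) sources, mixed linearly.
    rng = np.random.default_rng(0)
    T = 20000
    env = np.abs(np.sin(np.arange(T) * np.array([[0.001], [0.0023]])))  # slowly varying power
    s = env * rng.standard_normal((2, T))
    A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
    W, y = natural_gradient_decorrelation(A @ s)
    print("W @ A (ideally close to a scaled permutation):\n", W @ A)
```

Because the update multiplies the correction into W from the left, the trajectory of the combined system W A depends only on W A itself, which is the equivariant property mentioned in the abstract; note that the sources in this sketch are Gaussian at every instant, so only their nonstationary second-order statistics make them separable.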