In this paper we present a parameter optimization method, relative trust-region learning, in which the trust-region method and relative optimization [M. Zibulevsky, Blind source separation with relative Newton method, in: Proceedings of the ICA, Nara, Japan, 2003, pp. 897-902] are jointly exploited. The relative trust-region method determines a search direction and step size from a quadratic model of the objective function (as in conventional trust-region methods) and updates the parameters multiplicatively (as in relative optimization). Applying this method to independent component analysis (ICA) yields the relative TR-ICA algorithm, which possesses the equivariant property (as the relative gradient does) and converges faster than relative-gradient and even Newton-type algorithms. Empirical comparisons with several existing ICA algorithms demonstrate this useful behavior of the relative TR-ICA algorithm, namely equivariance and fast convergence.
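To make the two ingredients of the abstract concrete, the following is a minimal sketch in Python/NumPy of a multiplicative (relative) ICA update combined with simple trust-region radius control. It is not the paper's algorithm: the contrast is assumed to be the standard log-cosh maximum-likelihood objective, the trust-region subproblem is approximated by a scaled relative-gradient (Cauchy-point style) step rather than the full quadratic model, and the function name and all parameter values are hypothetical.

```python
import numpy as np

def relative_tr_ica_sketch(X, n_iter=200, delta=0.5, delta_max=2.0, eta=0.1, eps=1e-8):
    """Sketch only: relative (multiplicative) updates with trust-region step control.
    X is assumed to be whitened data of shape (n_sources, n_samples)."""
    n, T = X.shape
    W = np.eye(n)                            # demixing matrix, updated multiplicatively

    def objective(W):
        # Log-cosh ML contrast for super-Gaussian sources:
        # -log|det W| + E[ sum_i log cosh(y_i) ],  y = W x
        Y = W @ X
        _, logdet = np.linalg.slogdet(W)
        return -logdet + np.mean(np.log(np.cosh(Y)).sum(axis=0))

    f = objective(W)
    for _ in range(n_iter):
        Y = W @ X
        G = np.tanh(Y)                       # score function of the log-cosh contrast
        # Relative gradient: derivative w.r.t. a multiplicative perturbation (I + D) W
        rel_grad = (G @ Y.T) / T - np.eye(n)
        norm = np.linalg.norm(rel_grad)      # Frobenius norm
        if norm < eps:
            break
        D = -delta * rel_grad / norm         # step confined to the region ||D|| <= delta
        W_new = (np.eye(n) + D) @ W          # multiplicative (relative) update
        f_new = objective(W_new)
        pred_red = delta * norm              # reduction predicted by the linear model
        rho = (f - f_new) / max(pred_red, eps)
        if rho > eta:                        # sufficient agreement: accept the step
            W, f = W_new, f_new
            if rho > 0.75:
                delta = min(2.0 * delta, delta_max)
        else:                                # poor agreement: reject and shrink the region
            delta *= 0.25
    return W
```

As a usage illustration, for whitened observations X of shape (n_sources, n_samples), W = relative_tr_ica_sketch(X) returns a demixing matrix whose rows recover the sources up to scale and permutation; because the update acts multiplicatively on W, the iteration behaves equivariantly with respect to the unknown mixing matrix, which is the property the abstract emphasizes.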