A new class of learning algorithms for independent component analysis (ICA) is presented. Starting from a theoretical discussion of the convex divergence, this information measure is minimized to derive new ICA algorithms. Because the convex divergence includes logarithmic information measures as special cases, the presented method yields faster algorithms than existing logarithmic ones. Another important feature of this paper's ICA algorithm is that it accepts supervisory information, which is used to reduce the permutation indeterminacy inherent in ordinary ICA; the most relevant activation pattern can then be ranked first. The full algorithm is tested on brain-map distillation from functional MRI data. The derived algorithm is faster than logarithmic ones with little additional memory requirement, and successfully finds task-related brain maps on a conventional personal computer.