Multilayer feedforward networks are universal approximators. Neural Networks.
An application of the principle of maximum information preservation to linear systems. Advances in Neural Information Processing Systems 1.
Independent component analysis, a new concept? Signal Processing, special issue on higher-order statistics.
A fast fixed-point algorithm for independent component analysis. Neural Computation.
Natural gradient works efficiently in learning. Neural Computation.
Learning in Neural Networks: Theoretical Foundations.
Equivariant adaptive source separation. IEEE Transactions on Signal Processing.
Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks.
In blind source separation (BSS), two separation techniques are mainly used: minimum mutual information (MMI), where minimizing the mutual information of the outputs yields an independent random vector, and maximum entropy (ME), where the output entropy is maximized. However, it is still unclear why ME should solve the separation problem, i.e. result in an independent vector. Yang and Amari gave a partial justification of ME in the linear case [18], proving that, under the assumption of zero-mean sources, ME does not change the solutions of MMI except for scaling and permutation. In this paper, we generalize Yang and Amari's approach to nonlinear BSS problems, where random vectors are mixed by the output functions of layered neural networks. We show that certain solution points of MMI are kept fixed by ME provided that no scaling is allowed in any layer. In general, however, ME may also change the scaling in the non-output layers and thereby leave the MMI solution points. We therefore conclude by suggesting that in nonlinear ME algorithms, the norm of every row of each non-output layer's weight matrix should be kept fixed during the later epochs of network training.
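The suggested constraint — freezing the row norms of the non-output weight matrices in later epochs — can be sketched as a post-update projection step. The following NumPy snippet is a minimal illustration under assumed names (`fix_row_norms` and the target norms are hypothetical, not from the paper): after each gradient update, every row of a hidden-layer weight matrix is rescaled back to a prescribed norm, so that ME cannot drift away from an MMI solution point by rescaling the non-output layers.

```python
import numpy as np

def fix_row_norms(W, target_norms):
    """Rescale each row of W so that row i has norm target_norms[i].

    Intended as a projection applied after each gradient update to the
    weight matrices of the non-output layers (hypothetical helper, a
    sketch of the constraint proposed in the paper's conclusion).
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)          # current row norms, shape (n, 1)
    return W * (target_norms[:, None] / np.maximum(norms, 1e-12))

# Example: freeze the rows of a 4x4 hidden-layer weight matrix at unit norm.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_fixed = fix_row_norms(W, np.ones(4))
print(np.linalg.norm(W_fixed, axis=1))  # every row now has norm 1
```

In practice this projection would be applied only in the later training epochs, once the network is near a separating solution; earlier epochs remain unconstrained so the scaling can still adapt.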