Natural gradient works efficiently in learning. Neural Computation.
Analysis of sparse representation and blind source separation. Neural Computation.
A Variational Method for Learning Sparse and Overcomplete Representations. Neural Computation.
Learning Overcomplete Representations. Neural Computation.
Convolutive Blind Source Separation in the Frequency Domain Based on Sparse Representation. IEEE Transactions on Audio, Speech, and Language Processing.
K-hyperline clustering learning for sparse component analysis. Signal Processing.
A simple overcomplete ICA algorithm by non-orthogonal pair optimizations. IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks.
A novel predual dictionary learning algorithm. Journal of Visual Communication and Image Representation.
Overcomplete representations are more robust in noisy environments and more flexible in matching structure in the data. Lewicki and Sejnowski (2000) proposed an efficient extended natural gradient for learning an overcomplete basis and developed an overcomplete representation approach. However, they derived their gradient through many approximations, and their proof is quite complicated. To give the method a stronger theoretical basis, this note provides a brief and more rigorous mathematical proof of this gradient. In addition, we propose a more robust constrained Lewicki-Sejnowski gradient.
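For orientation, the extended natural gradient of Lewicki and Sejnowski (2000) updates an overcomplete basis A by ΔA ∝ -A(zsᵀ + I), where s are the coefficients inferred for a data sample and z = ∂ log p(s)/∂s is the prior score. Below is a minimal, illustrative sketch of one such step; the pseudoinverse coefficient estimate and the Laplacian prior are simplifying assumptions standing in for the MAP inference used in the original method, and the learning rate `lr` is an arbitrary choice.

```python
import numpy as np

def lewicki_sejnowski_step(A, x, lr=0.01):
    """One extended natural gradient step (illustrative sketch).

    A : (n, m) overcomplete basis, m >= n.
    x : (n,) data sample.

    The coefficients s are approximated here by the pseudoinverse;
    the original method infers MAP coefficients under a sparse prior.
    """
    s = np.linalg.pinv(A) @ x        # crude coefficient estimate (assumption)
    z = -np.sign(s)                  # d/ds log p(s) for a Laplacian prior
    m = A.shape[1]
    # Extended natural gradient rule: dA = -lr * A (z s^T + I)
    return A - lr * A @ (np.outer(z, s) + np.eye(m))
```

In the complete (square, invertible) case this rule reduces to Amari's natural gradient for ICA, which is what the note's proof makes precise.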