A network that develops to maximize the mutual information between its output and the signal portion of its input (which is admixed with noise) is useful for extracting salient input features, and may provide a model for aspects of biological neural network function. I describe a local synaptic learning rule that performs stochastic gradient ascent in this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate gaussian. Feedforward connection strengths are modified by a Hebbian rule during a "learning" phase in which examples of input signal plus noise are presented to the network, and by an anti-Hebbian rule during an "unlearning" phase in which examples of noise alone are presented. Each recurrent lateral connection has two values of connection strength, one for each phase; these values are updated by an anti-Hebbian rule.
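To make the two-phase scheme concrete, here is a minimal sketch in Python/NumPy under stated assumptions; it is not the paper's exact update equations. The network sizes, signal and noise covariances, learning rates, the iterative settling of the lateral loop (`settle`), the phase sampler (`sample`), and the clipping/renormalization steps are all illustrative choices. The sketch only mirrors the structure described above: Hebbian feedforward updates on signal-plus-noise examples, anti-Hebbian feedforward updates on noise-alone examples, and a separate anti-Hebbian lateral weight matrix for each phase.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 2            # input/output sizes (illustrative)
eta_w, eta_l = 1e-3, 1e-2     # feedforward / lateral learning rates (assumed)

W = rng.normal(scale=0.1, size=(n_out, n_in))    # feedforward weights
L = {"learn": np.zeros((n_out, n_out)),          # lateral weights: one set
     "unlearn": np.zeros((n_out, n_out))}        # per phase, as described

A = rng.normal(size=(n_in, 3))                   # low-rank Gaussian signal

def sample(phase):
    """Signal plus noise in the learning phase, noise alone otherwise."""
    noise = rng.normal(scale=0.5, size=n_in)
    return A @ rng.normal(size=3) + noise if phase == "learn" else noise

def settle(u, lateral, n_steps=20):
    """Relax the recurrent lateral loop z <- u + L z (assumed scheme)."""
    z = u.copy()
    for _ in range(n_steps):
        z = u + lateral @ z
    return z

for _ in range(20000):
    for phase, sign in (("learn", +1.0), ("unlearn", -1.0)):
        x = sample(phase)
        z = settle(W @ x, L[phase])
        # Feedforward: Hebbian on signal+noise, anti-Hebbian on noise alone.
        W += sign * eta_w * np.outer(z, x)
        # Lateral: anti-Hebbian in both phases; drives the loop toward
        # decorrelating that phase's outputs.
        zz = np.outer(z, z)
        L[phase] -= eta_l * (zz - np.diag(np.diag(zz)))
        # Clamp lateral weights so the settling loop stays contractive, and
        # renormalize W to keep the toy simulation bounded. Both are
        # practical stabilizers, not part of the abstract's description.
        np.clip(L[phase], -0.9, 0.9, out=L[phase])
        W /= max(1.0, np.linalg.norm(W) / 3.0)
```

As a rough diagnostic, after training the output variance on signal-plus-noise examples should exceed that on noise-alone examples, indicating the feedforward weights have oriented toward directions where signal variance dominates the noise.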