For a neural network comprising feedforward and lateral connections, a local learning rule is proposed that causes the lateral connections to learn directly the inverse of a covariance matrix. In contrast to earlier work, the rule requires only one processing pass through the lateral connections for each input presentation, and consists of a simple anti-Hebbian term. This provides a simple and effective building block for online network learning algorithms that implement optimization principles from statistics, information theory, or control theory, whenever a running estimate of the inverse covariance matrix is useful. An application to infomax learning (mutual information maximization) in the presence of input and output noise illustrates the method.
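To make the idea concrete, the following is a minimal sketch of a one-pass anti-Hebbian update whose fixed point is the inverse covariance matrix. This is an illustrative stochastic-approximation rule, not necessarily the exact update proposed in the paper: here the lateral weight matrix `M` is updated as ΔM = η(I − y xᵀ), where y = M x is the single pass of input x through the lateral connections. At the fixed point E[M x xᵀ] = M C = I, so M converges to C⁻¹. The covariance `C`, learning rate, and iteration count are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# A random symmetric positive-definite input covariance C (assumed, for demo).
B = rng.standard_normal((d, d)) / np.sqrt(d)
C = B @ B.T + 0.5 * np.eye(d)
L = np.linalg.cholesky(C)          # used to draw inputs x ~ N(0, C)

M = np.eye(d)                      # lateral weights, initialized to identity
eta = 0.001                        # small learning rate (assumed)

for _ in range(200_000):
    x = L @ rng.standard_normal(d)             # one input presentation
    y = M @ x                                  # single pass through lateral connections
    M += eta * (np.eye(d) - np.outer(y, x))    # anti-Hebbian term: -y x^T

# Compare the learned lateral weights against the true inverse covariance.
C_inv = np.linalg.inv(C)
err = np.linalg.norm(M - C_inv) / np.linalg.norm(C_inv)
print(f"relative error vs C^-1: {err:.3f}")
```

Note that the update touches only locally available quantities (the pre- and post-lateral activities x and y), which is what makes a rule of this form biologically and computationally attractive compared with explicitly inverting a running covariance estimate.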