We propose a neural-network-based autoassociative memory system for unsupervised learning, intended as an example of how a general information-processing architecture similar to that of the neocortex could be organized. The network's units are arranged into two separate groups called populations: an input population and a hidden population. Units in the input population form receptive fields that project sparsely onto the units of the hidden population, and these forward projections are trained with competitive learning. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. This system can process correlated and densely coded patterns, which regular attractor neural networks handle very poorly, and it performs well on a number of typical attractor-neural-network tasks such as pattern completion, noise reduction, and prototype extraction.
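The architecture described in the abstract can be sketched as a toy NumPy model. This is a minimal illustration under assumed details, not the paper's implementation: it uses winner-take-all competitive learning for the forward projections, one-hot hidden codes as the "sparse" representation, and an outer-product Hebbian rule for the back projection. The paper's hidden population is a recurrent attractor network; those recurrent dynamics are omitted here, and all sizes, names, and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 16, 8   # population sizes (illustrative, not from the paper)
LR = 0.1              # learning rate used for both rules

def competitive_step(W_fwd, x):
    """Winner-take-all competitive learning: the hidden unit whose
    forward weights best match x moves toward x."""
    winner = int(np.argmax(W_fwd @ x))
    W_fwd[winner] += LR * (x - W_fwd[winner])
    return winner

def train(patterns, epochs=50):
    W_fwd = rng.random((N_HID, N_IN))   # input -> hidden forward projections
    W_back = np.zeros((N_IN, N_HID))    # hidden -> input back projection
    for _ in range(epochs):
        for x in patterns:
            winner = competitive_step(W_fwd, x)
            h = np.zeros(N_HID)
            h[winner] = 1.0                  # sparse (one-hot) hidden code
            W_back += LR * np.outer(x, h)    # Hebbian: strengthen co-active pairs
    return W_fwd, W_back

def recall(W_fwd, W_back, x):
    """Map a (possibly corrupted) input to its hidden code, then
    reconstruct the input through the back projection."""
    h = np.zeros(N_HID)
    h[int(np.argmax(W_fwd @ x))] = 1.0
    r = W_back @ h
    return (r > 0.5 * r.max()).astype(float)  # binarize the reconstruction

# Two correlated binary prototypes (they share their first four bits),
# the kind of input that plain attractor networks handle poorly.
p1 = np.array([1,1,1,1, 1,1,1,1, 0,0,0,0, 0,0,0,0], dtype=float)
p2 = np.array([1,1,1,1, 0,0,0,0, 1,1,1,1, 0,0,0,0], dtype=float)
W_fwd, W_back = train([p1, p2])

noisy = p1.copy()
noisy[0] = 0.0                           # corrupt one bit
restored = recall(W_fwd, W_back, noisy)  # pattern completion through the loop
```

The recall path (input → competitive hidden code → Hebbian back projection) is what gives the system its pattern-completion and noise-reduction behavior: a corrupted input that still selects the right hidden code is reconstructed from that code's learned back projection.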