We describe a system of thousands of binary perceptrons with coarsely oriented edges as input that is able to recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a prelearned attractor (serving as an associative hook) in the recurrent net, corresponding to a randomly selected subpopulation of the perceptrons. In training, the attractor of the correct class is first activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels, and to allow the use of the same threshold on all perceptrons. Recognition is based on the visual stimulus alone. The stimulus activates the recurrent network, whose dynamics then drive it to a sustained attractor state concentrated in the correct class subset, providing a form of working memory. We believe this architecture is more transparent than standard feedforward two-layer networks and has stronger biological analogies.
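To make the training rule concrete, here is a minimal toy sketch of field-dependent Hebbian learning with positive synapses, in the spirit described above: while the correct class attractor clamps each perceptron's desired state, afferent synapses of units that should fire are potentiated only while their input field is still below threshold plus a margin, and afferents of units that should stay silent are depressed only while their field exceeds threshold minus a margin, with all weights clipped at zero. All sizes, the threshold, margin, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_REC = 100, 50        # toy sizes: input edge features, recurrent perceptrons
THETA = 5.0                  # shared firing threshold (same on all perceptrons)
MARGIN = 1.0                 # learning stops once the field clears the margin
DW = 0.1                     # synaptic step size (illustrative)

# Random positive feedforward weights from the input layer
W = rng.uniform(0.0, 0.2, size=(N_REC, N_IN))

def field_dependent_hebbian_step(W, x, target):
    """One update. x: binary input pattern; target: desired binary state
    of each perceptron, i.e. its membership in the class attractor."""
    h = W @ x                                    # postsynaptic fields
    # Potentiate afferents of 'on' units whose field is still below THETA + MARGIN
    pot = (target == 1) & (h < THETA + MARGIN)
    # Depress afferents of 'off' units whose field is still above THETA - MARGIN
    dep = (target == 0) & (h > THETA - MARGIN)
    W[pot] += DW * x                             # Hebbian increase on active inputs
    W[dep] -= DW * x
    np.clip(W, 0.0, None, out=W)                 # synapses remain non-negative
    return W

x = (rng.random(N_IN) < 0.3).astype(float)       # sparse binary stimulus
target = (rng.random(N_REC) < 0.2).astype(float) # attractor subpopulation
for _ in range(50):
    field_dependent_hebbian_step(W, x, target)
```

Because updates stop once a unit's field clears the margin, the rule is self-limiting: fields of attractor units settle just above threshold and fields of the remaining units stay below it, which is what makes a single shared threshold workable across varying feature statistics.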