VisNet2 is a model for investigating aspects of invariant visual object recognition in the primate visual system. It is a four-layer feedforward network with convergence to each part of a layer from a small region of the preceding layer, with competition between the neurons within a layer, and with a trace learning rule that helps it learn transform invariance. The trace rule is a modified Hebbian rule that adjusts synaptic weights according to both the current firing rates and the firing rates to recently seen stimuli. This enables neurons to learn to respond similarly to the gradually transforming inputs they receive, which over the short term are likely to arise from the same object, given the statistics of normal visual input. First, we introduce for VisNet2 both single-neuron and multiple-neuron information-theoretic measures of its ability to respond to transformed stimuli. Second, using these measures, we show quantitatively that resetting the trace between stimuli is not necessary for good performance. Third, it is shown that the sigmoid activation functions used in VisNet2, which allow the sparseness of the representation to be controlled, support good performance with sparse distributed representations. Fourth, it is shown that VisNet2 operates well with medium-range lateral inhibition, with a radius of the same order as the size of the region of the preceding layer from which neurons receive inputs. Fifth, in an investigation of different learning rules for learning transform invariance, it is shown that VisNet2 operates better with a trace rule that incorporates in the trace only activity from the preceding presentations of a given stimulus, with no contribution to the trace from the current presentation, and that this is related to temporal difference learning.
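The trace rule described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the parameter names (`eta` for the trace decay, `alpha` for the learning rate) and the array shapes are assumptions, and the `use_previous_trace_only` flag stands in for the fifth finding's variant, in which the current presentation does not contribute to the trace used in the weight update.

```python
import numpy as np

def trace_rule_update(w, x, y, y_trace, eta=0.8, alpha=0.01,
                      use_previous_trace_only=False):
    """One Hebbian trace update for a layer of neurons (illustrative sketch).

    w       : (n_out, n_in) synaptic weights
    x       : (n_in,)  presynaptic firing rates at time tau
    y       : (n_out,) postsynaptic firing rates at time tau
    y_trace : (n_out,) trace carried over from time tau-1
    eta     : trace decay term (assumed name); alpha: learning rate
    use_previous_trace_only : if True, only activity from preceding
        presentations drives the update (the variant related to
        temporal difference learning); the current firing y does not
        contribute to the trace used for the weight change.
    """
    if use_previous_trace_only:
        trace_for_update = y_trace                        # bar{y}^{tau-1}
    else:
        trace_for_update = (1 - eta) * y + eta * y_trace  # bar{y}^{tau}
    # Hebbian weight change: trace of postsynaptic activity times
    # presynaptic firing, for every synapse at once.
    w = w + alpha * np.outer(trace_for_update, x)
    # Update the trace for the next presentation regardless of variant.
    new_trace = (1 - eta) * y + eta * y_trace
    return w, new_trace
```

With `use_previous_trace_only=True` and a zero trace (e.g. at the first view of a stimulus), the update is zero, reflecting that only earlier presentations of the same stimulus can bind the current transform to the neuron's response.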