Over successive stages, the visual system develops neurons that respond with view, size and position invariance to objects or faces. A number of computational models have been developed to explain how transform-invariant cells could develop in the visual system. However, a major limitation of computer modelling studies to date has been that the visual stimuli are typically presented one at a time to the network during training. In this paper, we investigate how vision models may self-organize when multiple stimuli are presented together within each visual image during training. We show that once the number of independent stimuli grows large enough, standard competitive neural networks can suddenly switch from learning representations of the multi-stimulus input patterns to representing the individual stimuli. Furthermore, the competitive networks can learn transform-invariant (e.g. position- or view-invariant) representations of the individual stimuli if the network is presented with input patterns containing multiple transforming stimuli during training. Finally, we extend these results to a multi-layer hierarchical network model (VisNet) of the ventral visual system. The network is trained on input images containing multiple rotating 3D objects, and we show that it is able to develop view-invariant representations of the individual objects.
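The switch described above can be illustrated with a minimal sketch of winner-take-all competitive learning on composite inputs. This is not the paper's model; the stimulus dictionary, layer sizes, learning rate and update rule below are all illustrative assumptions. Each training image is the superposition of several sparse binary stimuli, and the winning neuron's weight vector is moved toward the input and renormalized:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a dictionary of 8 sparse binary "stimuli" over 64 input lines.
n_inputs, n_stimuli, n_neurons = 64, 8, 8
stimuli = (rng.random((n_stimuli, n_inputs)) < 0.15).astype(float)

# Feedforward weights of one competitive layer, kept at unit length.
W = rng.random((n_neurons, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

eta = 0.1
for _ in range(2000):
    # Each training image contains several stimuli presented together.
    chosen = rng.choice(n_stimuli, size=3, replace=False)
    x = np.clip(stimuli[chosen].sum(axis=0), 0.0, 1.0)

    # Winner-take-all competition: only the best-matching neuron learns.
    winner = int(np.argmax(W @ x))
    W[winner] += eta * (x - W[winner])       # Hebbian move toward the input
    W[winner] /= np.linalg.norm(W[winner])   # renormalize the weight vector
```

With a small dictionary each neuron tends to latch onto a whole composite pattern; as the number of independent stimuli grows, no single composite recurs often enough, and the statistically reliable structure each winner can converge on is an individual stimulus.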