The human visual system has the remarkable ability to recognize objects largely invariant to their position, rotation, and scale. Neurobiological findings are best interpreted through computational models that simulate the signal processing of the visual cortex, where such invariance is likely achieved step by step from early to late areas of visual processing. While several algorithms have been proposed for learning feature detectors, only a few studies address the biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex by learning so-called complex cells from a sequence of static images. The resulting complex-cell responses are largely invariant to phase and position.
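The abstract does not give the learning equations, so as a rough illustration only: Hebbian growth paired with a homeostatic constraint (here an Oja-style weight normalization, not the paper's calcium-based rule) can be sketched as follows. The weight vector grows along correlated input directions while the decay term keeps its norm bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, lr=0.01):
    """One Oja-style Hebbian update: a Hebbian growth term plus a
    homeostatic decay term that keeps the weight norm bounded.
    (Illustrative stand-in; not the calcium-based rule of the paper.)"""
    y = float(w @ x)                  # postsynaptic activity
    return w + lr * y * (x - y * w)   # Hebbian term minus normalization

# Toy demonstration: inputs have one high-variance direction; the weight
# vector aligns with it, and its norm stays near 1 (homeostatic effect).
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(5000):
    x = np.array([rng.normal(scale=2.0), rng.normal(scale=0.3)])
    w = hebbian_step(w, x)

print(np.linalg.norm(w))  # close to 1
print(abs(w[0]))          # close to 1: aligned with the dominant axis
```

This captures only the generic principle the abstract alludes to: unconstrained Hebbian learning diverges, and a homeostatic mechanism is what keeps single-neuron responses stable during unsupervised learning.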