A model is presented for unsupervised learning of low-level vision tasks, such as the extraction of surface depth. A key assumption is that perceptually salient visual parameters (e.g., surface depth) vary smoothly over time. This assumption is used to derive a learning rule that maximizes the long-term variance of each unit's output while simultaneously minimizing its short-term variance. The length of the half-life associated with each of these variances is not critical to the success of the algorithm. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. This maximizes the information throughput with respect to low-frequency parameters implicit in the input sequence. The model is used to learn stereo disparity from temporal sequences of random-dot and gray-level stereograms containing synthetically generated subpixel disparities. The presence of temporal discontinuities in disparity does not prevent learning or generalization to previously unseen image sequences. The implications of this class of unsupervised methods for learning in perceptual systems are discussed.
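The learning rule described above can be sketched in a toy setting: a single linear unit performs gradient ascent on the ratio of its long-term to short-term output variance, which yields a Hebbian update over the long time scale and an anti-Hebbian update over the short one. This is a minimal illustration under stated assumptions, not the paper's exact formulation; the two-dimensional input, half-lives, learning rate, and gradient clipping are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
t = np.arange(T)
# Dimension 0 carries a slowly varying "perceptually salient" parameter;
# dimension 1 carries a rapidly varying distractor with larger variance,
# so a purely variance-maximizing (PCA-like) rule would prefer the noise.
X = np.stack([np.sin(2 * np.pi * t / 400), rng.standard_normal(T)], axis=1)

w = rng.standard_normal(2) * 0.1
eta = 0.005
lam_s = 0.5 ** (1 / 5)    # short-term half-life: ~5 time steps (assumed)
lam_l = 0.5 ** (1 / 200)  # long-term half-life: ~200 time steps (assumed)

y_bar_s = y_bar_l = 0.0                      # running output means
x_bar_s, x_bar_l = np.zeros(2), np.zeros(2)  # running input means
V_s = V_l = 1.0                              # running output variances

for x in X:
    y = w @ x
    # exponentially weighted short- and long-term statistics
    y_bar_s = lam_s * y_bar_s + (1 - lam_s) * y
    y_bar_l = lam_l * y_bar_l + (1 - lam_l) * y
    x_bar_s = lam_s * x_bar_s + (1 - lam_s) * x
    x_bar_l = lam_l * x_bar_l + (1 - lam_l) * x
    V_s = lam_s * V_s + (1 - lam_s) * (y - y_bar_s) ** 2
    V_l = lam_l * V_l + (1 - lam_l) * (y - y_bar_l) ** 2
    # gradient of log(V_l / V_s): Hebbian over the long time scale,
    # anti-Hebbian over the short one
    dw = (y - y_bar_l) * (x - x_bar_l) / V_l - (y - y_bar_s) * (x - x_bar_s) / V_s
    dw /= max(1.0, np.linalg.norm(dw))  # clip step size for stability
    w += eta * dw
    w /= np.linalg.norm(w)              # keep the weight vector bounded

print(w)
```

After training, the weight vector aligns with the slow input dimension even though the distractor dimension has greater raw variance, illustrating how the long-term/short-term variance trade-off selects temporally smooth parameters rather than merely high-variance ones.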