Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invariances in a simple model of the visual system. Here, a theoretical analysis of the optimization problem solved by SFA is presented, which provides a deeper understanding of the simulation results obtained in previous studies.
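The slowness principle described above can be illustrated with a minimal linear SFA sketch: whiten the input, then pick the whitened direction whose time derivative has the smallest variance. This is a toy illustration, not the paper's implementation; the function name and the two-signal example below are assumptions for demonstration.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Minimal linear slow feature analysis.

    X : (T, d) array, a multivariate time series.
    Returns a (d, n_features) weight matrix whose columns map the
    centered input to the slowest-varying output signals.
    """
    X = X - X.mean(axis=0)

    # Whiten the input so all directions have unit variance.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)      # columns scaled by 1/sqrt(eigenvalue)
    Z = X @ W_white

    # Among whitened directions, find those whose temporal derivative
    # has minimal variance: the slowest features.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    d_evals, d_evecs = np.linalg.eigh(dcov)  # ascending eigenvalues

    return W_white @ d_evecs[:, :n_features]
```

On a linear mixture of a slow sinusoid and a fast one, the first extracted feature recovers the slow source (up to sign), matching the intuition that slowly varying features are extracted from quickly varying inputs.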