The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multidimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
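To make the slowness heuristic concrete, the standard linear form of SFA can be sketched as a two-step procedure: whiten the time series, then find the unit-variance directions whose temporal derivative has the smallest variance via an eigendecomposition. This is a minimal illustration only, not the probabilistic formulation developed in the paper; the function name and variables are our own.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Minimal linear slow feature analysis sketch.

    X: array of shape (T, D), a D-dimensional time series of length T.
    Returns the n_features slowest-varying unit-variance projections.
    """
    # 1. Center and whiten the data so all directions have unit variance.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)            # whitening matrix (D, D)
    Z = Xc @ W                            # whitened signal, identity covariance
    # 2. Among whitened directions, pick those minimizing the variance
    #    of the temporal derivative (the "slowest" features).
    dZ = np.diff(Z, axis=0)               # discrete temporal derivative
    dcov = dZ.T @ dZ / len(dZ)
    _, dvecs = np.linalg.eigh(dcov)       # ascending order: slowest first
    return Z @ dvecs[:, :n_features]
```

For example, applied to a mixture of a slow and a fast sinusoid, the first extracted feature closely tracks the slow source (up to sign), which is the behavior the slowness heuristic is designed to produce.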