Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It remains open how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons of simulated cortical microcircuits to learn, without any supervision, to discriminate between spoken digits and to detect repeated firing patterns embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states lasting several hundred milliseconds. In addition, it enables neurons to learn, without supervision, to keep track of time (relative to a stimulus onset or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed-point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
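The linear SFA computation referred to in the abstract can be sketched in a few lines: whiten the input trajectory, then take the directions along which the whitened signal changes most slowly. The following is a minimal numpy illustration under standard SFA assumptions, not the paper's spiking-network implementation; the function name and the toy two-signal mixture below are my own.

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Linear slow feature analysis.

    Finds weight vectors w such that y = w^T x varies as slowly as
    possible over time, subject to the outputs having unit variance
    and being mutually decorrelated.

    x : array of shape (T, d), one input sample per time step.
    Returns an array of shape (d, n_components) mapping centered
    inputs to the slowest features.
    """
    x = x - x.mean(axis=0)                    # center the signal
    # Whitening transform: make the input covariance the identity
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W_white = eigvec / np.sqrt(eigval)        # scales each eigenvector column
    z = x @ W_white
    # Covariance of the temporal derivative of the whitened signal
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    # Slowest directions = eigenvectors with the smallest eigenvalues
    dval, dvec = np.linalg.eigh(dcov)
    order = np.argsort(dval)[:n_components]
    return W_white @ dvec[:, order]           # maps raw (centered) input to slow features
```

Applied to a linear mixture of a slowly varying signal and a fast one, the first extracted feature recovers the slow source (up to sign), which is the sense in which slowness alone, given suitable input statistics, can single out the discriminative direction.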