Understanding the guiding principles of sensory coding strategies is a central goal in computational neuroscience. Among these, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal at the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
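To make the slowness principle concrete, the linear case of SFA described above can be sketched in a few lines of NumPy: whiten the input signal, then pick the linear direction whose finite-difference (time-derivative) variance is smallest. This is a minimal illustration of the standard linear SFA procedure, not the paper's derivation; the function name `linear_sfa` and its interface are our own choices.

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Minimal linear slow feature analysis sketch.

    x: array of shape (T, d), a multivariate signal sampled over time.
    Returns the n_components slowest unit-variance linear features,
    ordered from slowest to fastest.
    """
    # Center the signal.
    x = x - x.mean(axis=0)
    # Whiten: decorrelate the components and normalize their variances.
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whitener = eigvec / np.sqrt(eigval)   # column j scaled by 1/sqrt(eigval[j])
    z = x @ whitener                      # whitened signal, cov(z) = I
    # Slowness objective: minimize the variance of the time derivative,
    # approximated here by finite differences of consecutive samples.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    _, dvec = np.linalg.eigh(dcov)        # eigh returns ascending eigenvalues,
    return z @ dvec[:, :n_components]     # so the first columns are the slowest
```

For instance, applied to a linear mixture of a slow and a fast sinusoid, the first extracted feature recovers the slow source (up to sign), which is the temporally invariant component the abstract refers to.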