Neural computation is plagued by uncertainty, arising both from noise in neurons and from the ill-posed nature of many tasks. Perhaps surprisingly, many studies show that the brain handles these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the capabilities of populations of neurons to implement computations in the face of uncertainty. However, one major facet of uncertainty has received comparatively little attention: time. In a dynamic, rapidly changing world, data are only temporarily relevant. Here, we analyze the computational consequences of encoding stimulus trajectories in populations of neurons. For the most obvious, simple, instantaneous encoder, the correlations induced by natural, smooth stimuli force the decoder to access information that is nonlocal both in time and across neurons. This formally amounts to a ruinous representation. We show that there is an alternative encoder that is computationally and representationally powerful, in which each spike contributes independent information; in other words, it is independently decodable. We suggest this as an appropriate foundation for understanding time-varying population codes. Furthermore, we show how adaptation to temporal stimulus statistics emerges directly from the demands of simple decoding.
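The temporal nonlocality described above can be illustrated with a minimal sketch that is not the paper's model: discretize time, give the stimulus trajectory a Gaussian random-walk (smoothness) prior, and let each time step produce an independent, instantaneous Gaussian observation. The posterior covariance over the whole trajectory then has nonzero off-diagonal entries, so the optimal estimate of the stimulus at one time step depends on observations from other times. The step count and variance parameters below are arbitrary choices for the illustration.

```python
import numpy as np

# Hypothetical discretization: smooth stimulus prior = Gaussian random walk,
# with an instantaneous, independent noisy observation at every time step.
T = 5    # number of time steps (arbitrary)
q = 0.1  # random-walk (smoothness) variance per step
r = 1.0  # observation noise variance

# Prior precision of the trajectory: tridiagonal, penalizing successive
# differences (this is what encodes temporal smoothness).
P = np.zeros((T, T))
for t in range(T - 1):
    P[t, t] += 1.0 / q
    P[t + 1, t + 1] += 1.0 / q
    P[t, t + 1] -= 1.0 / q
    P[t + 1, t] -= 1.0 / q
P += 1e-3 * np.eye(T)  # weak prior on each value so P is invertible

# Likelihood precision: diagonal, because each observation is instantaneous
# and independent of the others.
L = np.eye(T) / r

# Posterior covariance of the full trajectory given all observations.
post_cov = np.linalg.inv(P + L)

# Nonzero off-diagonal entries: the estimate at time t is coupled to
# observations at other times, i.e. decoding is temporally nonlocal.
print(abs(post_cov[0, 1]) > 1e-6)  # → True
```

Setting `q` very large (a nearly unsmoothed stimulus) shrinks the off-diagonal coupling toward zero, which is the regime in which an instantaneous decode would suffice.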