It has been proposed that populations of neurons process information in terms of probability density functions (PDFs) of analog variables. Such analog variables range, for example, from target luminance and depth on the sensory interface to eye position and joint angles on the motor output side. The requirement that analog variables must be processed leads inevitably to a probabilistic description, while the limited precision and lifetime of the neuronal processing units lead naturally to a population representation of information. We show how a time-dependent probability density ρ(x; t) over a variable x, residing in a specified function space of dimension D, may be decoded from the neuronal activities in a population as a linear combination of certain decoding functions φ_i(x), with coefficients given by the N firing rates a_i(t) (generally with D ≪ N). We show how the neuronal encoding process may be described by projecting a set of complementary encoding functions φ′_i(x) on the probability density ρ(x; t) and passing the result through a rectifying nonlinear activation function. We show how both the encoders φ′_i(x) and the decoders φ_i(x) may be determined by minimizing cost functions that quantify the inaccuracy of the representation. Expressing a given computation in terms of the manipulation and transformation of probabilities, we show how this representation leads to a neural circuit that can carry out the required computation within a consistent Bayesian framework, with the synaptic weights being explicitly generated in terms of the encoders, decoders, conditional probabilities, and priors.
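As a concrete illustration of the scheme the abstract describes, the following minimal Python/NumPy sketch decodes a density as a linear combination of decoding functions, encodes it by projecting complementary encoding functions on the density and rectifying, and builds a weight matrix from encoders, decoders, and a conditional density. The Gaussian-bump decoders, the pseudoinverse (dual-basis) choice of encoders, the half-wave rectifier, and the particular conditional density p(y|x) are all illustrative assumptions, not the construction used in the paper itself.

```python
# Minimal sketch of the decode / encode / transform scheme described in the
# abstract. Assumptions (not taken from the paper): Gaussian-bump decoders,
# pseudoinverse (dual-basis) encoders, a half-wave rectifier as the
# nonlinearity, and an arbitrary conditional density p(y|x) for the circuit.
import numpy as np

N = 40                                   # number of neurons
x = np.linspace(-1.0, 1.0, 200)          # discretized analog variable
dx = x[1] - x[0]

# Decoding functions phi_i(x): localized Gaussian bumps (assumed form).
centers = np.linspace(-1.0, 1.0, N)
phi = np.exp(-(x[None, :] - centers[:, None]) ** 2 / (2 * 0.1**2))  # (N, 200)

# Complementary encoding functions phi'_i(x): here the least-squares dual
# basis, so that <phi'_i, phi_j> ~ delta_ij and encode->decode projects
# onto the span of the decoders (one way to realize "complementary").
G = phi @ phi.T * dx                     # Gram matrix of the decoders
phi_dual = np.linalg.pinv(G) @ phi       # (N, 200)

def encode(rho):
    """Firing rates a_i = g(<phi'_i, rho>), with a rectifying g."""
    return np.maximum(phi_dual @ rho * dx, 0.0)

def decode(a):
    """Linear readout: rho_hat(x) = sum_i a_i phi_i(x)."""
    return a @ phi

# Example density: bimodal, e.g. two candidate target depths.
rho = 0.6 * np.exp(-(x - 0.4) ** 2 / (2 * 0.05**2)) \
    + 0.4 * np.exp(-(x + 0.3) ** 2 / (2 * 0.08**2))
rho /= rho.sum() * dx

a = encode(rho)
print("representation error:", np.abs(rho - decode(a)).sum() * dx)

# A probabilistic transformation as a weight matrix: with an (assumed)
# conditional density p(y|x), the weights below map input rates encoding
# rho(x) to output rates encoding the marginal integral p(y|x) rho(x) dx.
y = x                                           # reuse the grid for y
P = np.exp(-(y[:, None] - 0.8 * x[None, :]) ** 2 / (2 * 0.1**2))
P /= P.sum(axis=0, keepdims=True) * dx          # normalize each p(.|x) over y
W = (phi_dual @ P @ phi.T) * dx * dx            # encoders_y . p(y|x) . decoders_x
b = np.maximum(W @ a, 0.0)                      # output rates, rectified
print("transform error:", np.abs(P @ rho * dx - decode(b)).sum() * dx)
```

With this construction, applying W to the input rates and rectifying yields output rates whose linear decode approximates ∫ p(y|x) ρ(x) dx, which is the sense in which synaptic weights generated from encoders, decoders, and conditional probabilities can implement a probabilistic transformation.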