Temporal spike codes play a crucial role in neural information processing. In particular, there is strong experimental evidence that interspike intervals (ISIs) are used for stimulus representation in neural systems. However, very few algorithmic principles exploit the benefits of such temporal codes for probabilistic inference about stimuli or decisions. Here, we describe and rigorously prove the functional properties of a spike-based processor that uses ISI distributions to perform probabilistic inference. The abstract processor architecture serves as a building block for more concrete, neural implementations of the belief propagation (BP) algorithm in arbitrary graphical models (e.g., Bayesian networks and factor graphs). The distributed nature of graphical models matches well with the architectural and functional constraints imposed by biology. In our model, ISI distributions represent the BP messages exchanged between factor nodes, leading to the interpretation of a single spike as a random sample drawn from such a distribution. We verify the abstract processor model by numerical simulation on full graphs and demonstrate that it can be applied even in the presence of analog variables. As a particular example, we also show results of a concrete, neural implementation of the processor, although in principle our approach is more flexible and admits different neurobiological interpretations. Furthermore, electrophysiological data from area LIP during behavioral experiments are assessed in light of ISI coding, leading to concrete, testable, quantitative predictions and a more accurate description of these data than hitherto existing models provide.
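The central idea above, that a BP message is a distribution and each spike is a random sample from it, can be illustrated with a minimal sum-product step on a two-variable factor graph. The toy potentials, variable names, and sample count below are illustrative assumptions rather than the paper's actual model; the sketch only shows that an empirical histogram of samples converges to the exact sum-product message.

```python
import random

# Tiny factor graph: X -- f -- Y, with binary X and Y.
# prior_x is the incoming message to factor f from X;
# psi is the pairwise potential psi(x, y) (rows normalized for sampling).
random.seed(0)

prior_x = [0.7, 0.3]
psi = [[0.9, 0.1],
       [0.2, 0.8]]

# Exact sum-product message f -> Y: m(y) = sum_x prior_x[x] * psi[x][y]
m_exact = [sum(prior_x[x] * psi[x][y] for x in range(2)) for y in range(2)]
z = sum(m_exact)
m_exact = [v / z for v in m_exact]

def draw(p):
    """Draw an index from a discrete distribution p (one 'spike')."""
    r, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return len(p) - 1

# Sample-based approximation of the same message: each iteration plays
# the role of one spike, drawn from the message distribution.
n = 100_000
counts = [0, 0]
for _ in range(n):
    x = draw(prior_x)        # sample the incoming message
    counts[draw(psi[x])] += 1  # propagate through the factor
m_sampled = [c / n for c in counts]

print("exact:  ", m_exact)     # [0.69, 0.31]
print("sampled:", m_sampled)   # close to the exact message
```

With enough samples the empirical histogram matches the exact message to within sampling noise, which is the sense in which a spike train whose ISIs follow the message distribution can carry a BP message.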