A biologically realizable Bayesian computation in a cortical neural network
ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Part I
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this article, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random-dots motion discrimination task, the model generates responses that mimic the activity of evidence-accumulating neurons in cortical areas LIP and FEF. The framework we introduce posits a new interpretation of cortical activity in terms of log posterior probabilities of stimuli occurring in the natural world.
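The core computation the abstract describes, recursive Bayesian inference for a hidden Markov model with network activity read out as log posterior probabilities, can be sketched with the standard forward (filtering) recursion. The following is a minimal illustration, not the paper's network implementation: the state space (four motion directions), transition matrix, and noisy evidence are all invented for the example.

```python
import numpy as np

# Hedged sketch: forward filtering for a hidden Markov model, with the
# "neural activity" interpreted as log posterior probabilities over the
# hidden state (here, one of four hypothetical motion directions).
# All parameters below are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)

n_states = 4                        # e.g. four motion directions
T = np.full((n_states, n_states), 0.05)
np.fill_diagonal(T, 0.85)           # hidden state tends to persist
T /= T.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def filter_log_posterior(log_lik):
    """log_lik: (timesteps, n_states) array of per-observation
    log-likelihoods. Returns the (timesteps, n_states) log posteriors
    from the forward recursion:
        p(s_t | o_1..t) ∝ p(o_t | s_t) * sum_{s'} p(s_t | s') p(s' | o_1..t-1)
    """
    log_post = np.zeros_like(log_lik)
    prior = np.full(n_states, 1.0 / n_states)   # uniform initial belief
    for t, ll in enumerate(log_lik):
        pred = prior @ T                # one-step prediction through dynamics
        unnorm = np.log(pred) + ll      # combine prediction with evidence
        unnorm -= unnorm.max()          # subtract max for numerical stability
        post = np.exp(unnorm)
        post /= post.sum()              # normalize to a distribution
        log_post[t] = np.log(post)
        prior = post
    return log_post

# Noisy momentary evidence favouring state 2, loosely analogous to the
# random-dots task: weak signal buried in observation noise.
log_lik = rng.normal(0.0, 0.5, size=(50, n_states))
log_lik[:, 2] += 1.0
log_post = filter_log_posterior(log_lik)
print(int(np.argmax(log_post[-1])))   # most probable state after accumulation
```

The log-posterior trace for the favoured state rises as evidence accumulates, which is the qualitative behaviour the abstract attributes to LIP/FEF neurons; the leaky integration induced by the transition matrix bounds how far back the evidence effectively reaches.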