We introduce a framework for decision making in which the learning of decision making is reduced to its simplest and biologically most plausible form: Hebbian learning on a linear neuron. We cast our Bayesian-Hebb learning rule as reinforcement learning in which certain decisions are rewarded, and prove that each synaptic weight will on average converge exponentially fast to the log odds of receiving a reward when its pre- and postsynaptic neurons are active. In our simple architecture, a particular action is selected from the set of candidate actions by a winner-take-all operation. The global reward assigned to this action then modulates the update of each synapse. Apart from this global reward signal, our reward-modulated Bayesian Hebb rule is a pure Hebb update that depends only on the coactivation of the pre- and postsynaptic neurons, not on the weighted sum of all presynaptic inputs to the postsynaptic neuron, as in the perceptron learning rule or the Rescorla-Wagner rule. This simple approach to action-selection learning requires that information about sensory inputs be presented to the Bayesian decision stage in a suitably preprocessed form, resulting from other adaptive processes (acting on a larger timescale) that detect salient dependencies among input features. Hence our proposed framework for fast learning of decisions also provides interesting new hypotheses regarding neural codes and computational goals of cortical areas that provide input to the final decision stage.
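The convergence claim above — that a synaptic weight drifts to the log odds of reward given coactivation — can be illustrated with a small simulation. The update used below is an assumption, not necessarily the paper's exact rule: it is chosen so that its expected change vanishes precisely at w = log(p/(1-p)), where p is the reward probability when pre- and postsynaptic neurons are coactive (with reward: w gains eta*(1+e^(-w)); without: w loses eta*(1+e^(w))). All function names are hypothetical.

```python
import math
import random

def bayesian_hebb_update(w, rewarded, eta=0.01):
    """One reward-modulated update for a synapse whose pre- and
    postsynaptic neurons are coactive on this trial.
    Assumed update form: its expected value is zero exactly at
    w = log(p/(1-p)), the log odds of reward under coactivation,
    since p*(1+e^-w) = (1-p)*(1+e^w) = 1 at that point."""
    if rewarded:
        return w + eta * (1.0 + math.exp(-w))
    return w - eta * (1.0 + math.exp(w))

def simulate(p_reward, n_trials=50000, seed=1):
    """Drive one always-coactive synapse with Bernoulli(p_reward)
    rewards; return the weight averaged over the last half of the
    trials, after the transient has died out."""
    rng = random.Random(seed)
    w, tail = 0.0, []
    for t in range(n_trials):
        w = bayesian_hebb_update(w, rng.random() < p_reward)
        if t >= n_trials // 2:
            tail.append(w)
    return sum(tail) / len(tail)

w_avg = simulate(p_reward=0.8)
target = math.log(0.8 / 0.2)  # true log odds of reward
print(f"learned weight ~ {w_avg:.2f}, log odds = {target:.2f}")
```

In the full architecture described above, a winner-take-all stage would pick the action with the largest weighted input sum, and only the winning action's synapses would receive this update; the simulation isolates the single-synapse convergence property. Linearizing the expected update around the fixed point gives a drift of roughly -eta*(w - w*), consistent with the exponentially fast average convergence stated in the abstract.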