An important issue in recent neuroscience research is understanding the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine a reward prediction error analogous to the computational TD error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose a hypothesis that integrates these two contrasting positions: in our view, phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that must learn different skills in order to obtain rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements reaches high performance in sufficiently complex conditions.
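To make the hypothesis concrete, here is a minimal TD(0) sketch of the kind of learning signal the abstract describes: a single TD-like prediction error driven by the sum of a permanent extrinsic reward and a temporary intrinsic reward (a sensory prediction error that fades as an internal predictor learns the world). All names, dynamics, and constants below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 10
GOAL = N_STATES - 1
GAMMA = 0.95
ALPHA_V = 0.1       # critic learning rate (assumed value)
ALPHA_MODEL = 0.2   # learning rate of the sensory predictor (assumed value)

V = np.zeros(N_STATES)                      # state values (critic)
predictor = np.zeros((N_STATES, N_STATES))  # learned next-state predictions

def intrinsic_reward(s, s_next):
    """Sensory prediction error: large for unexpected transitions, shrinking
    toward zero as the predictor learns them -- hence a *temporary*
    (intrinsic) reinforcement."""
    surprise = 1.0 - predictor[s, s_next]
    predictor[s] += ALPHA_MODEL * (np.eye(N_STATES)[s_next] - predictor[s])
    return surprise

def extrinsic_reward(s_next):
    """Fixed payoff at the goal state -- a *permanent* (extrinsic)
    reinforcement."""
    return 1.0 if s_next == GOAL else 0.0

for episode in range(200):
    s = 0
    while s != GOAL:
        s_next = min(s + rng.integers(1, 3), GOAL)  # toy random dynamics
        # Composite reinforcement: extrinsic + intrinsic, as hypothesized
        r = extrinsic_reward(s_next) + intrinsic_reward(s, s_next)
        # TD error: the dopamine-like learning signal under this hypothesis
        delta = r + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA_V * delta
        s = s_next
```

Early in learning, the intrinsic surprise term dominates the TD error and drives exploration of novel transitions; once the predictor has absorbed them, the signal is carried by the extrinsic reward alone, mirroring the paper's claim that the same dopamine-like error can drive both action discovery and reward maximization.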