Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: A simulated robotic study

  • Authors:
  • Marco Mirolli, Vieri G. Santucci, Gianluca Baldassarre

  • Affiliations:
  • Istituto di Scienze e Tecnologie della Cognizione (ISTC), CNR, Via San Martino della Battaglia 44, 00185, Roma, Italy
  • Istituto di Scienze e Tecnologie della Cognizione (ISTC), CNR, Via San Martino della Battaglia 44, 00185, Roma, Italy and School of Computing and Mathematics, University of Plymouth, Plymouth PL4 ...
  • Istituto di Scienze e Tecnologie della Cognizione (ISTC), CNR, Via San Martino della Battaglia 44, 00185, Roma, Italy

  • Venue:
  • Neural Networks
  • Year:
  • 2013

Abstract

An important issue in recent neuroscientific research is to understand the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine as a reward prediction error, similar to the computational TD-error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose an original hypothesis that integrates these two contrasting positions: in our view, phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that has to learn different skills in order to obtain rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements is able to reach high performance in sufficiently complex conditions.
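The hypothesized learning signal can be illustrated as a standard TD error computed over a composite reinforcement, where the intrinsic component is a transient surprise signal that fades as a sensory predictor learns to anticipate the event. The following is a minimal sketch under simplifying assumptions (tabular values, a scalar event predictor); all names are illustrative and this is not the paper's actual robotic model:

```python
def td_error(r_ext, r_int, v_s, v_s_next, gamma=0.9):
    """TD-like prediction error driven by a composite reinforcement:
    permanent extrinsic reward plus temporary intrinsic reinforcement."""
    return (r_ext + r_int) + gamma * v_s_next - v_s


class IntrinsicSignal:
    """Intrinsic reinforcement modeled as a sensory prediction error:
    it is large when a salient event is unexpected and vanishes as the
    predictor learns, making the reinforcement temporary by construction."""

    def __init__(self, lr=0.2):
        self.pred = 0.0  # learned expectation of the salient event
        self.lr = lr

    def __call__(self, event_occurred):
        surprise = float(event_occurred) - self.pred  # sensory prediction error
        self.pred += self.lr * surprise               # predictor improves
        return max(surprise, 0.0)                     # temporary reinforcement


# The same unexpected event, repeated: the intrinsic signal decays to
# zero as the event becomes predictable, so learning driven by it stops.
intrinsic = IntrinsicSignal()
signals = [intrinsic(True) for _ in range(20)]
```

With this setup, `signals[0]` is maximal (the event is fully unexpected) and later values shrink geometrically, so the composite TD error is dominated early on by intrinsic surprise and later only by extrinsic reward, mirroring the two functional roles the abstract attributes to dopamine.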