A computational model of integration between reinforcement learning and task monitoring in the prefrontal cortex

  • Authors:
  • Mehdi Khamassi; René Quilodran; Pierre Enel; Emmanuel Procyk; Peter F. Dominey

  • Affiliations:
  • INSERM, SBRI, Bron, France (all authors)

  • Venue:
  • SAB'10: Proceedings of the 11th International Conference on Simulation of Adaptive Behavior: From Animals to Animats
  • Year:
  • 2010


Abstract

Taking inspiration from neural principles of decision-making is of particular interest for improving the adaptivity of artificial systems. Research at the crossroads of neuroscience and artificial intelligence over the last decade has helped clarify how the brain organizes reinforcement learning (RL) processes, i.e., the adaptation of decisions based on feedback from the environment. The challenge now is to understand how the brain flexibly regulates RL parameters, such as the exploration rate, based on the task structure, a process called meta-learning ([1] Doya, 2002). Here, we propose a computational mechanism of exploration regulation based on neurophysiological and behavioral data recorded in the monkey prefrontal cortex during a visuo-motor task with a clear distinction between exploratory and exploitative actions. We first fit the monkeys' trial-by-trial choices with an analytical reinforcement learning model. The model with the highest likelihood of predicting the monkeys' choices reveals different exploration rates at different task phases. In addition, the optimized model has a very high learning rate and a reset of the action values associated with a cue that signals condition changes in the task. Beyond classical RL mechanisms, these results suggest that the monkey brain extracted task regularities to tune learning parameters in a task-appropriate way. Finally, we use these principles to develop a neural network model that extends a previous cortico-striatal loop model. In its prefrontal cortex component, prediction error signals are extracted to produce feedback categorization signals, which boost exploration after errors and attenuate it during exploitation, locking the network onto the currently rewarded choice. The model performs the task as the monkeys do and yields a set of experimental predictions to be tested by future neurophysiological recordings.
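
To make the fitted model concrete, the following is a minimal sketch of the kind of analytical RL model the abstract describes: standard Q-learning with softmax action selection, a phase-dependent inverse temperature (the exploration rate), a high learning rate, and an action-value reset triggered by the task cue. The class and parameter names (PhaseDependentQLearner, alpha, beta_explore, beta_exploit) and their values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(q_values, beta):
    """Softmax action-selection probabilities with inverse temperature beta."""
    exp_q = np.exp(beta * (q_values - q_values.max()))  # subtract max for numerical stability
    return exp_q / exp_q.sum()

class PhaseDependentQLearner:
    """Q-learner whose exploration rate depends on the task phase (hypothetical sketch)."""

    def __init__(self, n_actions, alpha=0.9, beta_explore=1.0, beta_exploit=5.0):
        self.q = np.zeros(n_actions)
        self.alpha = alpha                # very high learning rate, as in the fitted model
        self.beta_explore = beta_explore  # low inverse temperature -> flatter softmax, more exploration
        self.beta_exploit = beta_exploit  # high inverse temperature -> sharper softmax, exploitation

    def choose(self, exploiting):
        beta = self.beta_exploit if exploiting else self.beta_explore
        probs = softmax(self.q, beta)
        return np.random.choice(len(self.q), p=probs)

    def update(self, action, reward):
        # Standard delta-rule update: q <- q + alpha * (reward - q)
        self.q[action] += self.alpha * (reward - self.q[action])

    def reset_on_cue(self):
        # Reset all action values when the task cue signals a condition change
        self.q[:] = 0.0

# Usage: explore until the first rewarded choice, then exploit
agent = PhaseDependentQLearner(n_actions=4)
a = agent.choose(exploiting=False)
agent.update(a, reward=1.0)
```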
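The exploration-regulation principle in the prefrontal cortex component can likewise be sketched as a simple rule mapping categorized feedback onto the inverse temperature: negative feedback flattens the softmax (boosting exploration), while positive feedback sharpens it (attenuating exploration and locking onto the rewarded choice). The function update_beta and its constants below are hypothetical illustrations of this principle, not the network model itself.

```python
def update_beta(beta, prediction_error, beta_min=0.5, beta_max=10.0,
                error_decay=0.5, success_gain=1.5):
    """Adjust the inverse temperature from a categorized feedback signal (illustrative)."""
    if prediction_error < 0:    # negative feedback: boost exploration
        beta *= error_decay     # lower beta -> flatter softmax over actions
    else:                       # positive feedback: attenuate exploration
        beta *= success_gain    # higher beta -> lock on the currently rewarded choice
    return min(max(beta, beta_min), beta_max)
```

Bounding beta keeps the agent from becoming either fully random after a run of errors or irreversibly greedy during exploitation, mirroring the regulation the abstract attributes to feedback categorization signals.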