A Model of Neuronal Specialization Using Hebbian Policy-Gradient with "Slow" Noise

  • Authors: Emmanuel Daucé
  • Affiliations: INRIA Lille Nord-Europe, Villeneuve d'Ascq, France, and Institute of Movement Sciences, University of the Mediterranean, Marseille, France
  • Venue: ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
  • Year: 2009

Abstract

We study a model of neuronal specialization using a policy-gradient reinforcement approach. (1) The neurons fire stochastically according to their synaptic input plus a noise term; (2) the environment is a closed-loop system composed of a rotating eye and a point-like visual target; (3) the network is composed of a foveated retina, a primary layer, and a motoneuron layer; (4) the reward depends on the distance between the subjective target position and the fovea; and (5) the weight update depends on a Hebbian trace defined according to a policy-gradient principle. To account for the mismatch between neuronal and environmental integration times, we distort the firing probability with a "pink noise" term whose autocorrelation is of the order of 100 ms, so that the firing probability is overestimated (or underestimated) over periods of about 100 ms. The rewards obtained during these periods assess the "value" of the corresponding elementary shifts and modify the firing probability accordingly. Since each motoneuron is associated with a particular angular direction, we test the preferred output of the visual cells at the end of the learning process. We find that, consistently with the observed final behavior, the visual cells preferentially excite the motoneurons heading in the opposite angular direction.
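The ingredients above can be sketched in a few lines of NumPy: stochastic Bernoulli firing driven by synaptic input plus a slowly varying noise term, a low-pass-filtered Hebbian eligibility trace, and a reward-gated weight update. This is a minimal illustration, not the paper's implementation: the slow noise is modeled here as an Ornstein-Uhlenbeck process (a stand-in for the pink-noise term, sharing its ~100 ms correlation time), and all parameter values and layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative assumptions, not taken from the paper).
dt = 1e-3            # simulation step: 1 ms
tau_noise = 0.1      # ~100 ms autocorrelation of the "slow" noise
tau_trace = 0.1      # time constant of the Hebbian eligibility trace
sigma = 0.5          # stationary amplitude of the noise
eta = 0.05           # learning rate
n_pre, n_post = 20, 8

W = 0.1 * rng.standard_normal((n_post, n_pre))   # visual -> motoneuron weights
xi = np.zeros(n_post)                            # slow noise state
trace = np.zeros_like(W)                         # Hebbian eligibility trace

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def step(x, reward):
    """One time step: slow-noise update, stochastic firing, Hebbian trace,
    and reward-modulated (policy-gradient) weight change."""
    global W, xi, trace
    # Ornstein-Uhlenbeck update: the noise decorrelates over ~tau_noise
    # seconds, so firing probabilities stay biased for ~100 ms stretches.
    xi += (-xi / tau_noise) * dt \
        + sigma * np.sqrt(2.0 * dt / tau_noise) * rng.standard_normal(n_post)
    p = sigmoid(W @ x + xi)                      # firing probability
    s = (rng.random(n_post) < p).astype(float)   # stochastic spikes
    # REINFORCE-style eligibility for Bernoulli units, (s - p) x^T,
    # low-pass filtered into a Hebbian trace.
    trace += (dt / tau_trace) * ((s - p)[:, None] * x[None, :] - trace)
    W += eta * reward * trace                    # reward gates the update
    return s

# Tiny driving example: with zero reward the weights stay fixed,
# even though the units keep firing stochastically.
x = rng.random(n_pre)
W0 = W.copy()
for _ in range(100):
    step(x, reward=0.0)
assert np.allclose(W, W0)
```

Because the noise is correlated over many simulation steps, an up-fluctuation of `xi` raises a unit's firing rate for an extended episode; the reward accumulated during that episode then credits or discredits the synapses captured in `trace`, which is the "slow noise" credit-assignment idea the abstract describes.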