A computational neural model of goal-directed utterance selection

  • Authors:
  • Michael Klein; Hans Kamp; Guenther Palm; Kenji Doya

  • Affiliations:
  • Centre for Language and Speech Technology, Radboud University of Nijmegen, Postbus 9103, 6500 HD Nijmegen, The Netherlands; Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Neural Information Processing, University of Ulm, Germany; Okinawa Institute of Science and Technology, Neural Computation Unit, Japan

  • Venue:
  • Neural Networks
  • Year:
  • 2010


Abstract

It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, to support our cause, or to adopt our point of view. However, thus far a computational foundation for this view of language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis for these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state-action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address, and what to say.
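The mechanism described in the abstract can be sketched in a toy form. The following is a minimal illustration, not the paper's implementation: all state names, action names, and the world dynamics are invented. An agent holds an internal model (a next-state transition function on state-action pairs), learns a value function by TD(0) from interaction, and then selects among an utterance addressed to another agent, a non-verbal action, and doing nothing by simulating each with the internal model and scoring the predicted outcome.

```python
import random

random.seed(0)

# Toy sketch of goal-directed utterance selection (hypothetical example;
# names and dynamics are invented, not taken from the paper).
STATES = ["door_closed", "door_open"]
GOAL = "door_open"

# Actions: an utterance addressed to another agent ("Bob"), a non-verbal
# action, and doing nothing.
ACTIONS = ["say_open_to_bob", "push_door", "do_nothing"]

# Internal model: next-state transition function on (state, action) pairs.
# Here the addressee reliably complies, while pushing the stuck door fails.
MODEL = {
    ("door_closed", "say_open_to_bob"): "door_open",
    ("door_closed", "push_door"): "door_closed",
    ("door_closed", "do_nothing"): "door_closed",
}

def reward(state):
    """Desirability signal: reward only when the goal state is reached."""
    return 1.0 if state == GOAL else 0.0

def learn_values(episodes=200, steps=5, alpha=0.5, gamma=0.9):
    """TD(0) learning of V(s) from random exploration; the goal state is
    terminal, so its value stays zero and the reward carries the signal."""
    V = {s: 0.0 for s in STATES}
    for _ in range(episodes):
        s = "door_closed"
        for _ in range(steps):
            a = random.choice(ACTIONS)
            s2 = MODEL[(s, a)]
            V[s] += alpha * (reward(s2) + gamma * V[s2] - V[s])
            if s2 == GOAL:
                break
            s = s2
    return V

def select_action(state, V, gamma=0.9):
    """Goal-directed selection: simulate each action with the internal
    model and pick the one whose predicted next state is most desirable."""
    def score(a):
        s2 = MODEL[(state, a)]
        return reward(s2) + gamma * V[s2]
    return max(ACTIONS, key=score)

V = learn_values()
chosen = select_action("door_closed", V)
```

In this toy world the learned values make the agent prefer speaking over pushing or doing nothing, mirroring the abstract's claim that the architecture can decide when speaking (rather than a non-verbal action or inaction) best serves the goal.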