Grounding neural robot language in action

  • Authors:
  • Stefan Wermter, Cornelius Weber, Mark Elshaw, Vittorio Gallese, Friedemann Pulvermüller

  • Affiliations:
  • Hybrid Intelligent Systems, School of Computing and Technology, University of Sunderland, Sunderland, UK (Wermter, Weber, Elshaw); Institute of Neurophysiology, University of Parma, Parma, Italy (Gallese); Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK (Pulvermüller)

  • Venue:
  • Biomimetic Neural Learning for Intelligent Robots
  • Year:
  • 2005

Abstract

In this paper we describe two models for the neural grounding of robotic language processing in actions. These models are inspired by concepts of the mirror neuron system in order to produce learning by imitation, combining high-level vision, language and motor command inputs. The models learn to perform and recognise three behaviours, ‘go’, ‘pick’ and ‘lift’. The first, single-layer model uses an adapted Helmholtz machine wake-sleep algorithm to act like a Kohonen self-organising network that receives all inputs in a single layer. In contrast, the second, hierarchical model has two layers. In the lower-level hidden layer, the Helmholtz machine wake-sleep algorithm is used to learn the relationship between action and vision, while the upper layer uses the Kohonen self-organising approach to combine the output of the lower hidden layer with the language input. On the hidden layer of the single-layer model, the action words are represented in non-overlapping regions, and each neuron within a region accounts for a corresponding sensory-motor binding. In the hierarchical model, largely separate sensory and motor representations on the lower level are bound into corresponding sensory-motor pairings via the top level, which organises according to the language input.
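To make the hierarchical architecture concrete, the following is a minimal sketch, not the authors' implementation: a one-hidden-layer Helmholtz machine trained with the wake-sleep algorithm on a concatenated vision-plus-motor vector, with a small Kohonen self-organising map (SOM) on top that combines the resulting hidden activity with a one-hot language code. All dimensions, learning rates, the 3x3 map size and the toy input encoding are assumptions made purely for illustration.

```python
# Illustrative sketch of the two-level model (assumed dimensions and encodings).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# --- Lower level: Helmholtz machine over a binary vision+motor vector ---
n_vis, n_hid, n_lang, lr = 20, 10, 3, 0.05
R = rng.normal(0, 0.1, (n_hid, n_vis))   # recognition weights (bottom-up)
G = rng.normal(0, 0.1, (n_vis, n_hid))   # generative weights (top-down)

def wake_sleep_step(x):
    """One wake-sleep update on a binary sensory-motor vector x."""
    global G, R
    # Wake phase: recognise x, then train the generative weights to reconstruct it.
    h = sample(sigmoid(R @ x))
    G += lr * np.outer(x - sigmoid(G @ h), h)
    # Sleep phase: fantasise data from a random hidden code, train recognition weights.
    h_f = sample(np.full(n_hid, 0.5))
    x_f = sample(sigmoid(G @ h_f))
    R += lr * np.outer(h_f - sigmoid(R @ x_f), x_f)
    return sigmoid(R @ x)                 # hidden activity passed up to the SOM

# --- Upper level: Kohonen SOM over (hidden activity, language vector) ---
som_n = 9                                 # 3x3 map, flattened
W = rng.random((som_n, n_hid + n_lang))

def som_step(h, lang, lr_som=0.1, sigma=1.0):
    """One SOM update combining lower-level activity with the language input."""
    v = np.concatenate([h, lang])
    winner = np.argmin(np.linalg.norm(W - v, axis=1))
    gx, gy = np.divmod(np.arange(som_n), 3)           # grid coordinates of units
    d2 = (gx - gx[winner]) ** 2 + (gy - gy[winner]) ** 2
    nbh = np.exp(-d2 / (2 * sigma ** 2))              # Gaussian neighbourhood
    W[:] += lr_som * nbh[:, None] * (v - W)
    return winner

# Toy training loop over the three behaviours 'go', 'pick', 'lift'.
langs = np.eye(n_lang)                    # one-hot word codes (assumed encoding)
for epoch in range(200):
    for w in range(n_lang):
        x = sample(np.full(n_vis, 0.2))   # fake vision+motor pattern per behaviour
        x[w * 6:(w + 1) * 6] = 1.0
        h = wake_sleep_step(x)
        som_step(h, langs[w])
```

In this sketch the language input only reaches the top-level map, so the SOM, rather than the wake-sleep layer, is what organises the sensory-motor pairings by word, mirroring the division of labour the abstract attributes to the hierarchical model.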