A biologically inspired approach to learning multimodal commands and feedback for human-robot interaction

  • Authors:
  • Anja Austermann; Seiji Yamada

  • Affiliations:
  • The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan; National Institute of Informatics, Tokyo, Japan

  • Venue:
  • CHI '09 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2009

Abstract

In this paper, we describe a method that enables a robot to learn how a user gives commands and feedback to it through speech, prosody, and touch. We propose a biologically inspired approach based on human associative learning. In the first stage, which corresponds to stimulus encoding in natural learning, we use unsupervised training of HMMs to model the incoming stimuli. In the second stage, associative learning, these models are associated with a meaning using an implementation of classical conditioning. Top-down processing is applied to take the context into account as a bias for stimulus encoding. In an experimental study, we evaluated the learning of user feedback using special training tasks that allow the robot to explore and provoke situated feedback from the user. In this first study, the robot learned to discriminate between positive and negative feedback with an average accuracy of 95.97%.
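As a rough illustration of the second, associative stage, the sketch below uses the Rescorla-Wagner update rule, a standard computational model of classical conditioning. The abstract does not specify which conditioning model the authors implemented, so the rule itself, the stimulus labels, and the learning-rate parameters here are illustrative assumptions; in the paper's pipeline, the stimulus labels would come from the HMMs trained in the first stage.

```python
# Minimal sketch of associating encoded stimuli with a meaning via
# classical conditioning (Rescorla-Wagner update). Stimulus labels,
# learning rates, and the trial sequence are illustrative assumptions,
# not taken from the paper.

class ConditioningLearner:
    def __init__(self, alpha=0.3, beta=1.0, lambda_max=1.0):
        self.alpha = alpha            # stimulus salience (learning rate)
        self.beta = beta              # learning rate of the outcome
        self.lambda_max = lambda_max  # asymptote of associative strength
        self.strength = {}            # stimulus model -> association with "positive feedback"

    def update(self, present_stimuli, reinforced):
        """Update associative strengths after one trial.

        present_stimuli: stimulus model labels (e.g. HMM IDs) observed this trial
        reinforced: True if the situated outcome was positive feedback
        """
        # Prediction is summed over all co-occurring stimuli, which
        # reproduces blocking: already-predictive stimuli absorb the error.
        total = sum(self.strength.get(s, 0.0) for s in present_stimuli)
        target = self.lambda_max if reinforced else 0.0
        error = target - total
        for s in present_stimuli:
            v = self.strength.get(s, 0.0)
            self.strength[s] = v + self.alpha * self.beta * error


learner = ConditioningLearner()
# Hypothetical trials: prosody/touch stimulus labels paired with
# situated feedback provoked during a training task.
for _ in range(20):
    learner.update(["hmm_praise_prosody", "hmm_pat_touch"], reinforced=True)
    learner.update(["hmm_scold_prosody"], reinforced=False)

print(sorted(learner.strength.items()))
```

After enough trials, stimuli that reliably co-occur with positive feedback approach the asymptote while those paired with negative feedback stay near zero, which is the kind of positive/negative discrimination the study evaluates.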