Learning from object motion using visual saliency and speech phonemes by a humanoid robot

  • Authors:
  • Guolin Jin; Kenji Suzuki

  • Affiliations:
  • -; Graduate School of Systems and Information Engineering, University of Tsukuba

  • Venue:
  • ROBIO'09: Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics
  • Year:
  • 2009

Abstract

In this paper, we describe a novel method of word acquisition through multimodal interaction between a humanoid robot and humans. The developed robot acquires words, specifically verbs, from raw multimodal sensory stimuli by observing the movement of given objects and listening to human utterances, without symbolic representations of semantics. In addition, the robot can utter the learnt words based on its own phonemes, which correspond to a categorical phonetic feature map. We consider that words bind directly to non-symbolic perceptual features, such as the visual features of a given object and the acoustic features of a given utterance, rather than to symbolic representations of semantics.
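
To make the binding idea concrete, the following minimal Python sketch pairs a visual motion feature vector with the phoneme sequence heard alongside it, then utters the sequence bound to the most similar stored motion. The feature design, the class and function names, the ARPAbet-like phoneme labels, and the nearest-neighbour lookup are all illustrative assumptions; the paper itself derives object motion from visual saliency and produces speech from the robot's own categorical phonetic feature map.

    import numpy as np

    # Sketch of binding a verb to perceptual features, per the abstract.
    # Everything here (names, feature design, nearest-neighbour lookup)
    # is an illustrative assumption, not the authors' implementation.
    class VerbMemory:
        def __init__(self):
            self.features = []  # visual motion feature vectors
            self.phonemes = []  # co-occurring phoneme sequences

        def learn(self, motion_feature, phoneme_seq):
            # Store one co-occurring (object motion, heard utterance) pair.
            self.features.append(np.asarray(motion_feature, dtype=float))
            self.phonemes.append(list(phoneme_seq))

        def utter(self, motion_feature):
            # "Speak" the phoneme sequence bound to the nearest stored
            # motion in feature space.
            x = np.asarray(motion_feature, dtype=float)
            dists = [np.linalg.norm(x - f) for f in self.features]
            return self.phonemes[int(np.argmin(dists))]

    # Hypothetical 2-D motion features: (mean speed, vertical motion ratio).
    memory = VerbMemory()
    memory.learn([0.9, 0.1], ["R", "OW", "L"])        # rolling, mostly horizontal
    memory.learn([0.8, 0.9], ["JH", "AH", "M", "P"])  # jumping, mostly vertical
    print(memory.utter([0.85, 0.85]))                 # -> ['JH', 'AH', 'M', 'P']

Note that the word is never mapped to a symbolic meaning: the "verb" exists only as an association between a raw motion feature and a phoneme sequence, which is the non-symbolic grounding the abstract argues for.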