Human-Robot Interface Based on Speech Understanding Assisted by Vision

  • Authors:
  • Shengshien Chong; Yoshinori Kuno; Nobutaka Shimada; Yoshiaki Shirai

  • Venue:
  • ICMI '00 Proceedings of the Third International Conference on Advances in Multimodal Interfaces
  • Year:
  • 2000

Abstract

Speech recognition provides a natural and familiar interface for humans to convey information, which makes it a likely choice as the human interface of service robots. However, for a robot to act in accordance with what the user tells it, it must draw on information beyond the speech input alone. First, we consider a widely discussed problem in natural language processing: utterances abbreviated on the basis of context shared between the parties. A robot faces an additional problem: the lack of information linking the symbols in its internal world to objects in the real world. We propose a method that uses image processing to supply the information that language processing alone lacks to carry out the requested action. When image processing fails, the robot asks the user directly and uses his/her answer to accomplish its task. We confirm the approach through experiments both in simulation and on a real robot, and evaluate its reliability.
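The three-stage strategy described in the abstract (resolve a spoken reference from dialogue context, fall back to visual search, and finally ask the user) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; all class and function names are hypothetical, and the toy `Vision` class stands in for real image processing.

```python
def ask_user(question):
    # Stand-in for spoken dialogue with the user; a real system would use
    # speech synthesis and recognition here. (Hypothetical fixed answer.)
    return "the red box"

class Vision:
    """Toy visual search over a list of labeled detections."""
    def __init__(self, detections):
        self.detections = detections

    def find(self, description):
        # Return the first detection whose label matches the description.
        if description is None:
            return None
        return next((d for d in self.detections if description in d), None)

def resolve_target(command, dialogue_context, vision):
    """Resolve what a command refers to: language, then vision, then dialogue."""
    # 1. Language: recover an abbreviated referent ("it", "that one")
    #    from the context shared between robot and user.
    target = dialogue_context.get(command.get("referent"))
    if target is not None:
        return target
    # 2. Vision: ground the spoken symbol in the real world by searching
    #    the scene for an object matching the description.
    target = vision.find(command.get("description"))
    if target is not None:
        return target
    # 3. Dialogue fallback: image processing failed, so ask the user
    #    directly and use the answer to retry the visual search.
    return vision.find(ask_user("Which object do you mean?"))
```

For example, a fully elliptical command with no usable description falls through all the way to stage 3, where the user's answer drives a second visual search.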