A multimodal language to communicate with life-supporting robots through a touch screen and a speech interface

  • Authors:
  • T. Oka; H. Matsumoto; R. Kibayashi

  • Affiliations:
  • College of Industrial Technology, Nihon University, Chiba, Japan 275-8575, and Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan

  • Venue:
  • Artificial Life and Robotics
  • Year:
  • 2011

Abstract

This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. Using this language, users can combine spoken and pointing messages interactively to convey their intentions to the robots. Spoken messages include verb and noun phrases that describe the user's intentions. Pointing messages are given when the user's finger touches a camera image, a picture of the robot's body, or a button on a nearby touch screen; these messages convey a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.
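
As a purely illustrative sketch, not taken from the article, the combination of spoken and pointing messages described in the abstract could be modeled as a simple message structure; all class, field, and category names below are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional, Tuple


class PointingKind(Enum):
    """Hypothetical categories for what a touch on the screen conveys."""
    LOCATION = auto()    # a point in the environment, via the camera image
    DIRECTION = auto()   # a direction relative to the robot
    BODY_PART = auto()   # a part of the robot, via the robot picture
    CUE = auto()         # a timing cue, via a button
    REPLY = auto()       # an answer to a query from the robot


@dataclass
class PointingMessage:
    kind: PointingKind
    screen_xy: Tuple[int, int]          # where the finger touched the screen
    target: Optional[str] = None        # e.g. "left_arm" or "kitchen_table"


@dataclass
class SpokenMessage:
    verb_phrase: str                                        # e.g. "bring"
    noun_phrases: List[str] = field(default_factory=list)   # e.g. ["the cup"]


@dataclass
class MultimodalUtterance:
    """One user turn: a spoken message optionally combined with pointing messages."""
    spoken: Optional[SpokenMessage] = None
    pointing: List[PointingMessage] = field(default_factory=list)


# Example: saying "bring the cup" while touching the camera image to mark a location.
utterance = MultimodalUtterance(
    spoken=SpokenMessage(verb_phrase="bring", noun_phrases=["the cup"]),
    pointing=[PointingMessage(PointingKind.LOCATION, screen_xy=(312, 188),
                              target="kitchen_table")],
)
```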