Commanding a humanoid to move objects in a multimodal language

  • Authors:
  • Tetsushi Oka, Kaoru Sugita, Masao Yokota

  • Affiliations:
  • Tetsushi Oka: Department of Mathematical Information Engineering, College of Industrial Technology, Nihon University, Narashino, Chiba 275-8575, Japan
  • Kaoru Sugita and Masao Yokota: Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan

  • Venue:
  • Artificial Life and Robotics
  • Year:
  • 2010

Abstract

This article describes a study of a humanoid robot that moves objects at the request of its users. The robot understands commands in a multimodal language that combines spoken messages with two types of hand gestures. After learning the language for a short period, all ten novice users employed gestures when asked to direct the robot spontaneously to move objects. The success rate of multimodal commands exceeded 90%, and the users completed their tasks without trouble. They found gestures preferable to, and as easy as, verbal phrases for informing the robot of action parameters such as direction, angle, step, width, and height. The results show that the language is fairly easy for nonexperts to learn, and that it can be made more effective for directing humanoids to move objects by refining the language and improving our gesture detector.
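The abstract does not specify the command grammar, but the core idea, fusing a spoken message with gesture-supplied action parameters, can be sketched roughly. The Python sketch below is a hypothetical illustration, not the authors' implementation: it assumes two gesture types ("pointing" for direction and "span" for extent, standing in for the paper's unnamed gesture pair) and lets the gesture fill in parameters the spoken message leaves unspecified.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structures; the paper's actual command grammar and
# gesture vocabulary are not given in this abstract.

@dataclass
class SpokenMessage:
    action: str                           # e.g. "move", "raise", "step"
    direction: Optional[str] = None       # direction if given verbally

@dataclass
class HandGesture:
    kind: str                             # assumed types: "pointing" or "span"
    direction: Optional[str] = None       # direction shown by a pointing gesture
    magnitude_cm: Optional[float] = None  # width/height shown by a span gesture

@dataclass
class RobotCommand:
    action: str
    direction: Optional[str]
    magnitude_cm: Optional[float]

def fuse(speech: SpokenMessage, gesture: Optional[HandGesture]) -> RobotCommand:
    """Combine a spoken message with a concurrent gesture.

    Spoken parameters take effect when present; otherwise the gesture
    supplies the missing action parameters (direction, width, height, ...).
    """
    direction = speech.direction
    magnitude = None
    if gesture is not None:
        if direction is None and gesture.kind == "pointing":
            direction = gesture.direction
        if gesture.kind == "span":
            magnitude = gesture.magnitude_cm
    return RobotCommand(speech.action, direction, magnitude)

if __name__ == "__main__":
    # "Move it this far to the left": direction spoken, extent gestured.
    cmd = fuse(SpokenMessage("move", direction="left"),
               HandGesture("span", magnitude_cm=30.0))
    print(cmd)  # RobotCommand(action='move', direction='left', magnitude_cm=30.0)
```

The fallback ordering here (speech first, then gesture) is one plausible design choice; a real system would also have to align gestures with speech in time and handle conflicting or missing modalities.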