Autonomous Agents and Multi-Agent Systems
Artificial Life and Robotics
This article describes a study on a humanoid robot that moves objects at the request of its users. The robot understands commands in a multimodal language that combines spoken messages with two types of hand gestures. All ten novice users directed the robot with gestures when, after learning the language for a short period, they were asked to spontaneously direct it to move objects. The success rate of multimodal commands exceeded 90%, and the users completed their tasks without trouble. They found gestures preferable to, and as easy as, verbal phrases for conveying action parameters such as direction, angle, step, width, and height. The results show that the language is fairly easy for nonexperts to learn, and that it can be made more effective for directing humanoids to move objects by refining the language itself and improving our gesture detector.
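The abstract does not specify the command representation, but the described fusion of a spoken message with gesture-derived parameters can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, slot names, and data shapes are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MultimodalCommand:
    """Hypothetical fused command: the action and target object come from
    speech, while numeric action parameters come from hand gestures."""
    action: str                                 # e.g. "move", from speech
    target: str                                 # object named in speech
    params: Dict[str, float] = field(default_factory=dict)  # from gestures

def fuse(spoken: Dict[str, str], gesture_params: Dict[str, float]) -> MultimodalCommand:
    # Gesture readings fill parameter slots (direction, angle, step, width,
    # height) that the spoken phrase leaves open.
    return MultimodalCommand(action=spoken["action"],
                             target=spoken["object"],
                             params=dict(gesture_params))

# Example: "move the box" spoken, with direction and height shown by hand.
cmd = fuse({"action": "move", "object": "box"},
           {"direction": 90.0, "height": 0.5})
```

In this sketch the gesture channel supplies exactly the parameter types the study's users preferred to express by hand, while speech names the action and object.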