Incremental learning of gestures for human–robot interaction

  • Authors:
  • Shogo Okada; Yoichi Kobayashi; Satoshi Ishibashi; Toyoaki Nishida

  • Affiliations:
  • Kyoto University, Department of Intelligence Science and Technology, Graduate School of Informatics, Engineering Bldg. 10, 2F Room 214, Yoshida-Honmachi, Sakyo-ku, 606-8501, Kyoto, Japan
  • The University of Tokyo, Department of Creative Informatics, Graduate School of Information Science and Technology, Akihabara Daibiru Bldg. 13F, 1-18-3 Sotokanda, Chiyoda-ku, 101-0021, Tokyo, Japan
  • Kyoto University, Department of Intelligence Science and Technology, Graduate School of Informatics, Research Bldg. #5, Yoshida-Honmachi, Sakyo-ku, 606-8501, Kyoto, Japan
  • Kyoto University, Department of Intelligence Science and Technology, Graduate School of Informatics, Engineering Bldg. 10, 2F Room 214, Yoshida-Honmachi, Sakyo-ku, 606-8501, Kyoto, Japan

  • Venue:
  • AI & Society - Special Issue: The multiple faces of Social Intelligence Design
  • Year:
  • 2010


Abstract

For a robot to cohabit with people, it should be able to learn people’s nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally, in an unsupervised manner, during human–robot interaction; the user need not define the number or types of gestures before learning begins. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network (SOINN) and the hidden Markov model (HMM). We add an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing because of polysemy, i.e., being assigned more than one meaning. For example, the sentence “Keep on going left slowly” carries three meanings: “keep on” (1), “going left” (2), and “slowly” (3). We experimentally evaluated the clustering performance of the proposed method on gesture data recorded with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches, and that the interactive learning function further improves HB-SOINN’s learning performance.
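The core idea the abstract describes — clustering gesture observations incrementally, without fixing the number of clusters in advance — can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only, not the authors’ actual HB-SOINN algorithm: the feature vectors stand in for HMM-based gesture representations, and the `IncrementalClusterer` class, its `threshold` and `lr` parameters, and all values are illustrative assumptions.

```python
# Toy sketch of SOINN-style incremental clustering (illustrative only,
# not the paper's HB-SOINN): each input stands in for an HMM-based
# gesture feature vector. A new cluster node is created whenever an
# input is farther than `threshold` from every existing node; otherwise
# the nearest node is pulled toward the input.
import math


class IncrementalClusterer:
    def __init__(self, threshold=1.0, lr=0.1):
        self.threshold = threshold  # novelty distance (assumed value)
        self.lr = lr                # prototype adaptation rate
        self.nodes = []             # cluster prototypes, grown online

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, x):
        """Assign x to a cluster, creating one if x looks novel."""
        if not self.nodes:
            self.nodes.append(list(x))
            return 0
        dists = [self._dist(x, n) for n in self.nodes]
        i = min(range(len(dists)), key=dists.__getitem__)
        if dists[i] > self.threshold:   # novel gesture type: new node
            self.nodes.append(list(x))
            return len(self.nodes) - 1
        winner = self.nodes[i]          # known type: adapt its prototype
        for d in range(len(winner)):
            winner[d] += self.lr * (x[d] - winner[d])
        return i


clusterer = IncrementalClusterer(threshold=1.0)
# Two well-separated toy "gesture feature" streams arriving one by one.
for vec in [(0.0, 0.0), (0.1, -0.1), (5.0, 5.0), (4.9, 5.1), (0.05, 0.0)]:
    clusterer.observe(vec)
print(len(clusterer.nodes))  # → 2: two clusters emerge unsupervised
```

The point of the sketch is the unsupervised, open-ended behavior the abstract claims: cluster count is an output of learning, not an input. The paper’s interactive mechanism additionally lets a user split a cluster that has absorbed more than one meaning, which this toy omits.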