Face and gesture recognition using subspace method for human-robot interaction

  • Authors:
  • Md. Hasanuzzaman;T. Zhang;V. Ampornaramveth;M. A. Bhuiyan;Y. Shirai;H. Ueno

  • Affiliations:
  • Md. Hasanuzzaman, T. Zhang, V. Ampornaramveth, H. Ueno: Intelligent System Research Division, National Institute of Informatics, Tokyo, Japan
  • M. A. Bhuiyan: Jahangirnagar University, Dhaka, Bangladesh
  • Y. Shirai: Department of Computer Controlled Mechanical Systems, Osaka University, Suita, Japan

  • Venue:
  • PCM'04 Proceedings of the 5th Pacific Rim Conference on Advances in Multimedia Information Processing - Volume Part I
  • Year:
  • 2004

Abstract

This paper presents a vision-based face and gesture recognition system for human-robot interaction. Using the subspace method, the face and predefined hand poses are detected from the three largest skin-like regions, which are segmented using the YIQ color representation system. In this subspace method, a separate eigenspace is considered for each class or pose. A gesture is recognized using a rule-based approach whenever the combination of the three skin-like regions in a particular image frame matches a predefined gesture. The resulting gesture commands are sent to the robot over a TCP/IP network for human-robot interaction. Pose-invariant face recognition using the subspace method has also been addressed. The effectiveness of the method has been demonstrated by interacting with an entertainment robot named AIBO.
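The two core steps of the pipeline can be sketched in code: RGB-to-YIQ conversion with a skin-likelihood threshold on the I channel, and a per-class eigenspace classifier that assigns a sample to the class whose subspace reconstructs it with the smallest residual. This is a minimal illustrative sketch, not the authors' implementation; the `EigenspaceClassifier` name, the component count, and the I-channel threshold range are assumptions, not values from the paper.

```python
import numpy as np

def rgb_to_yiq(rgb):
    """Convert RGB values in [0, 1] to YIQ using the standard NTSC matrix."""
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return np.asarray(rgb, dtype=float) @ m.T

def skin_mask(rgb, i_range=(0.05, 0.30)):
    """Flag skin-like pixels by thresholding the I (in-phase) channel.
    The threshold range is an illustrative assumption, not the paper's values."""
    i = rgb_to_yiq(rgb)[..., 1]
    return (i >= i_range[0]) & (i <= i_range[1])

class EigenspaceClassifier:
    """One PCA eigenspace per class (face or hand pose); a sample is
    assigned to the class whose subspace reconstructs it best."""
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}  # label -> (mean vector, eigenvector basis)

    def fit(self, samples_by_label):
        for label, X in samples_by_label.items():
            X = np.asarray(X, dtype=float)
            mean = X.mean(axis=0)
            # SVD of the centered data yields the principal eigenvectors
            _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
            self.models[label] = (mean, vt[:self.n_components])

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        best, best_err = None, np.inf
        for label, (mean, basis) in self.models.items():
            coeffs = (x - mean) @ basis.T       # project into the eigenspace
            recon = mean + coeffs @ basis       # back-project to image space
            err = np.linalg.norm(x - recon)     # residual = distance from subspace
            if err < best_err:
                best, best_err = label, err
        return best
```

In this per-class formulation each pose keeps its own mean and basis, so classification reduces to comparing reconstruction residuals rather than projecting all classes into a single shared eigenspace.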