Gesture Spotting and Recognition for Human–Robot Interaction

  • Authors:
  • Hee-Deok Yang, A-Yeon Park, Seong-Whan Lee

  • Affiliations:
  • Dept. of Comput. Sci. & Eng., Korea Univ., Seoul

  • Venue:
  • IEEE Transactions on Robotics
  • Year:
  • 2007

Abstract

Visual interpretation of gestures can be useful in accomplishing natural human–robot interaction (HRI). Previous HRI research has focused on hand gestures, sign language, and command gesture recognition. For HRI to operate naturally, automatic recognition of whole-body gestures is required. This is a challenging problem, because describing and modeling meaningful gesture patterns from whole-body motion is complex. This paper presents a new method for recognizing whole-body key gestures in HRI. A human subject is first described by a set of features encoding the angular relationships among a dozen body parts in 3-D. Each feature vector is then mapped to a codeword, which serves as the discrete observation symbol of hidden Markov models (HMMs). To spot key gestures accurately, a method for designing a transition gesture model is proposed. To reduce the number of states in the transition gesture model, a model-reduction step merges similar states based on data-dependent statistics and relative entropy. Experimental results demonstrate that the proposed method is efficient and effective for automatic recognition of whole-body key gestures from motion sequences in HRI.
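
The state-merging step lends itself to a short illustration. The Python sketch below is a minimal, hypothetical reading of that step for a discrete-output HMM: states whose codeword-emission distributions are close under symmetrized relative entropy (KL divergence) are merged greedily. The greedy order, the row-averaging merge rule, and the threshold value are assumptions for illustration, not details taken from the paper.

    import numpy as np

    def relative_entropy(p, q, eps=1e-12):
        # D(p || q) between two discrete distributions, smoothed to avoid log(0).
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def merge_similar_states(emissions, threshold=0.05):
        # Greedily merge rows of an (n_states, n_codewords) emission matrix
        # whose symmetrized relative entropy falls below `threshold`.
        # Returns the reduced matrix and an old-state -> new-state index map.
        states = [np.asarray(row, dtype=float) for row in emissions]
        mapping = list(range(len(states)))
        merged = True
        while merged:
            merged = False
            for i in range(len(states)):
                for j in range(i + 1, len(states)):
                    d = 0.5 * (relative_entropy(states[i], states[j])
                               + relative_entropy(states[j], states[i]))
                    if d < threshold:
                        # Merge state j into state i (assumption: average the
                        # emission rows; the paper's merge rule may differ).
                        states[i] = 0.5 * (states[i] + states[j])
                        del states[j]
                        mapping = [i if m == j else (m - 1 if m > j else m)
                                   for m in mapping]
                        merged = True
                        break
                if merged:
                    break
        return np.vstack(states), mapping

    # Example: states 1 and 2 emit nearly identical codeword distributions
    # and are merged, shrinking the transition gesture model to 3 states.
    B = np.array([[0.70, 0.20, 0.10],
                  [0.10, 0.60, 0.30],
                  [0.12, 0.58, 0.30],
                  [0.20, 0.20, 0.60]])
    reduced, mapping = merge_similar_states(B)
    print(reduced.shape[0], mapping)  # -> 3 [0, 1, 1, 2]

The returned mapping records which original states were collapsed together, so the corresponding transition probabilities can be re-aggregated over the merged states.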