Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs

  • Authors:
  • Daehwan Kim; Jinyoung Song; Daijin Kim

  • Affiliations:
  • Department of Computer Science and Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Nam-Gu, Pohang 790784, Republic of Korea (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2007

Abstract

Existing gesture segmentation methods use a backward spotting scheme that first detects the end point, then traces back to the start point and sends the extracted gesture segment to a hidden Markov model (HMM) for recognition. This introduces an inevitable time delay between gesture segmentation and recognition, which makes the scheme unsuitable for continuous gesture recognition. To solve this problem, we propose a forward spotting scheme that performs gesture segmentation and recognition simultaneously. The start and end points of gestures are determined by zero crossings, from negative to positive (or from positive to negative), of a competitive differential observation probability, defined as the difference in observation probability between the best-matching gesture and the non-gesture. We also propose a sliding window and accumulative HMMs. The former alleviates the effect of incomplete feature extraction on the observation probability, and the latter greatly improves the gesture recognition rate by accepting all accumulated gesture segments between the start and end points and deciding the gesture type by a majority vote over all intermediate recognition results. We use a predetermined association mapping to obtain the 3D articulation data, which greatly reduces feature extraction time. We apply the proposed simultaneous gesture segmentation and recognition method to recognizing upper-body gestures for controlling curtains and lights in a smart home environment. Experimental results show that the proposed method achieves a recognition rate of 95.42% on continuously changing gestures.
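
The spotting rule described in the abstract can be illustrated with a short sketch. The Python code below is a minimal, illustrative reading of the forward spotting and accumulative-HMM voting ideas, not the authors' implementation: it assumes gesture HMMs and a non-gesture (garbage) HMM that each expose a `log_prob(window)` scoring method, and the window length, the `classify` helper, and the overall interface are assumptions made for illustration.

```python
# Minimal sketch of forward spotting with accumulative majority voting,
# under the assumptions stated above (HMM objects and window length are
# illustrative, not taken from the paper).

from collections import Counter

WINDOW = 5  # sliding-window length over the feature stream (assumed value)


def cdop(window, gesture_hmms, non_gesture_hmm):
    """Competitive differential observation probability: best gesture HMM
    score minus the non-gesture HMM score for the current window."""
    best_gesture = max(h.log_prob(window) for h in gesture_hmms.values())
    return best_gesture - non_gesture_hmm.log_prob(window)


def classify(segment, gesture_hmms):
    """Label a (possibly partial) gesture segment with the best-scoring HMM."""
    return max(gesture_hmms, key=lambda g: gesture_hmms[g].log_prob(segment))


def spot_and_recognize(features, gesture_hmms, non_gesture_hmm):
    """Simultaneous segmentation and recognition by forward spotting.

    A zero crossing of the CDOP from negative to positive marks a start
    point; a crossing from positive to negative marks an end point. Between
    the two, every accumulated segment is classified and the final label is
    decided by a majority vote over the intermediate results.
    """
    results = []
    votes, start, prev = Counter(), None, None
    for t in range(WINDOW, len(features) + 1):
        window = features[t - WINDOW:t]
        d = cdop(window, gesture_hmms, non_gesture_hmm)
        if prev is not None:
            if prev < 0 <= d:                 # start point: gesture begins
                start, votes = t - WINDOW, Counter()
            elif prev >= 0 > d and start is not None:   # end point reached
                results.append((start, t, votes.most_common(1)[0][0]))
                start = None
        if start is not None:                 # accumulate intermediate votes
            votes[classify(features[start:t], gesture_hmms)] += 1
        prev = d
    return results
```

In this reading, a single forward pass over the feature stream yields both the segment boundaries and the gesture label, so no backward trace to the start point is needed; the majority vote over intermediate decisions corresponds to what the abstract calls the accumulative HMMs.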