Segmented gesture recognition for controlling character animation

  • Authors:
  • En-Wei Huang; Li-Chen Fu

  • Affiliations:
  • National Taiwan University; National Taiwan University

  • Venue:
  • Proceedings of the 2008 ACM symposium on Virtual reality software and technology
  • Year:
  • 2008


Abstract

In this paper, we propose a method that uses vision-based gesture recognition to control character animation. Each animation sequence has a corresponding gesture to be recognized; we focus on upper-body motions and use a single camera to capture images. Human gestures are modeled by a learned graph model whose nodes are key frames of these gestures. The animation sequences are pre-processed to generate a motion graph, and a mapping between the gesture model and the animation motion graph is created. At run time, the recognized node sequence in the gesture model guides the animation to traverse the animation motion graph. Our method avoids the complex process of completely reconstructing the human motion while retaining advantages such as intuitiveness, quick response, and versatility. The proposed method can be applied to control avatar actions in a large virtual environment. Our experiments show that the segmented gesture recognition can robustly control the animation with quick response, even when there are ambiguities in the initial poses of some gestures.
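
The runtime idea described above can be illustrated with a minimal sketch: recognized nodes of the gesture model are mapped to nodes of an animation motion graph, and playback advances along graph edges toward the mapped target. The data structures and names below (MOTION_GRAPH, GESTURE_TO_MOTION, drive_animation) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque

# Hypothetical animation motion graph: clip node -> reachable clip nodes.
MOTION_GRAPH = {
    "idle": ["wave_start", "walk_start"],
    "wave_start": ["wave_loop"],
    "wave_loop": ["idle"],
    "walk_start": ["walk_loop"],
    "walk_loop": ["idle"],
}

# Hypothetical mapping from recognized gesture-model nodes to motion-graph nodes.
GESTURE_TO_MOTION = {
    "raise_arm": "wave_start",
    "arm_down": "idle",
    "step_pose": "walk_start",
}

def shortest_path(graph, start, goal):
    """Breadth-first search for a transition path between motion-graph nodes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal not reachable from the current clip

def drive_animation(recognized_nodes, current="idle"):
    """Traverse the motion graph as gesture-model nodes are recognized."""
    playback = [current]
    for gesture_node in recognized_nodes:
        target = GESTURE_TO_MOTION.get(gesture_node)
        if target is None or target == current:
            continue  # unknown gesture, or already at the target clip
        path = shortest_path(MOTION_GRAPH, current, target)
        if path:
            playback.extend(path[1:])
            current = target
    return playback

if __name__ == "__main__":
    # Example: a raise-arm gesture followed by lowering the arm.
    print(drive_animation(["raise_arm", "arm_down"]))
    # -> ['idle', 'wave_start', 'wave_loop', 'idle']
```

In this sketch the motion graph, rather than full pose reconstruction, determines which clips may follow the current one, which is consistent with the paper's goal of avoiding complete reconstruction of human motion while keeping the control responsive.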