In this paper, we propose a method that uses vision-based gesture recognition to control character animation. Each animation sequence has a corresponding gesture to be recognized; we focus on upper-body motions and capture images with a single camera. Human gestures are modeled by a learned graph model whose nodes are key frames of these gestures. The animation sequences are pre-processed to generate a motion graph, and a mapping between the gesture model and the animation motion graph is created. At run time, the recognized node sequence in the gesture model guides the traversal of the animation motion graph. Our method avoids the complex process of fully reconstructing human motion while retaining advantages such as intuitiveness, quick response, and versatility. The proposed method can be applied to control avatar actions in a large virtual environment. Our experiments show that segmented gesture recognition can robustly control the animation with quick response, even when there are ambiguities in the initial poses of some gestures.
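The control scheme described above — recognized gesture-model nodes driving traversal of an animation motion graph via a node-to-clip mapping — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the graph contents, clip names, and the `traverse` helper are all hypothetical assumptions.

```python
# Hypothetical sketch: gesture-model nodes (key frames) map to motion-graph
# clips, and the recognized node sequence guides motion-graph traversal.
# All node and clip names below are illustrative, not from the paper.

# Gesture model: directed graph over key-frame nodes.
gesture_graph = {
    "idle": ["raise_start"],
    "raise_start": ["raise_peak"],
    "raise_peak": ["idle"],
}

# Mapping from gesture-model nodes to motion-graph clips.
node_to_clip = {
    "idle": "stand",
    "raise_start": "arm_up_begin",
    "raise_peak": "arm_up_hold",
}

def traverse(recognized_nodes, graph, mapping, start="idle"):
    """Follow the recognized node sequence through the gesture graph,
    emitting the corresponding motion-graph clips. Transitions not
    allowed by the graph are skipped, so recognition noise (e.g. an
    ambiguous initial pose) cannot derail the animation."""
    clips = []
    current = start
    for node in recognized_nodes:
        if node in graph.get(current, []):  # accept valid transitions only
            current = node
            clips.append(mapping[current])
    return clips

# A noisy recognition stream: "bogus" is rejected, the rest plays through.
print(traverse(["raise_start", "raise_peak", "bogus", "idle"],
               gesture_graph, node_to_clip))
```

Restricting traversal to edges of the learned graph is one plausible way to obtain the robustness to ambiguous poses that the experiments report, since out-of-sequence recognitions simply leave the animation state unchanged.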