Goal-directed, dynamic animation of human walking
SIGGRAPH '89 Proceedings of the 16th annual conference on Computer graphics and interactive techniques
Simulating humans: computer graphics animation and control
SIGGRAPH '94 Proceedings of the 21st annual conference on Computer graphics and interactive techniques
Pfinder: Real-Time Tracking of the Human Body
IEEE Transactions on Pattern Analysis and Machine Intelligence
ACM SIGGRAPH 98 Conference abstracts and applications
Real-time translation of human motion from video to animation
ACM SIGGRAPH 99 Conference abstracts and applications
Computer Animation: Theory and Practice
W4S: A real-time system for detecting and tracking people in 2 1/2D
ECCV '98 Proceedings of the 5th European Conference on Computer Vision - Volume I
3D Part Recognition Method for Human Motion Analysis
CAPTECH '98 Proceedings of the International Workshop on Modelling and Motion Capture Techniques for Virtual Environments
Local and Global Skeleton Fitting Techniques for Optical Motion Capture
CAPTECH '98 Proceedings of the International Workshop on Modelling and Motion Capture Techniques for Virtual Environments
Incremental Tracking of Human Actions from Multiple Views
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Dynamic Models of Human Motion
FG '98 Proceedings of the 3rd. International Conference on Face & Gesture Recognition
Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation
Image based active model adaptation method for face reconstruction and sketch generation
Edutainment'06 Proceedings of the First international conference on Technologies for E-Learning and Digital Entertainment
This paper proposes a motion-generator approach to translating human motion from video image sequences into computer animation in real time. In this approach, a motion generator infers the current human motion and/or posture from data obtained by processing the source video images, then generates and sends a set of joint angles to the target human body model. Compared with the existing motion-capture approach, ours is more robust and tolerates broader environmental and postural conditions. Experiments on a prototype system show that an animated virtual human can walk, sit, and lie down as the real human does, without special illumination control.
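The two-stage pipeline the abstract describes (infer a posture from processed video data, then emit a joint-angle set for the body model) can be sketched as follows. This is a minimal illustrative sketch, not the authors' system: the posture labels, the single bounding-box feature, the thresholds, and all angle presets are assumptions introduced here.

```python
# Sketch of a "motion generator": a posture-inference stage feeding a
# joint-angle lookup for the target body model. All names, thresholds,
# and angle values are illustrative assumptions, not the paper's data.
from typing import Dict

# Hypothetical joint-angle presets (degrees) for a few coarse postures.
POSTURE_ANGLES: Dict[str, Dict[str, float]] = {
    "stand": {"hip": 0.0, "knee": 0.0, "torso": 0.0},
    "sit":   {"hip": 90.0, "knee": 90.0, "torso": 5.0},
    "lie":   {"hip": 10.0, "knee": 5.0, "torso": 90.0},
}

def infer_posture(bbox_height_ratio: float) -> str:
    """Infer a coarse posture from one assumed image feature: the
    tracked person's bounding-box height relative to standing height."""
    if bbox_height_ratio > 0.8:
        return "stand"
    if bbox_height_ratio > 0.45:
        return "sit"
    return "lie"

def generate_joint_angles(bbox_height_ratio: float) -> Dict[str, float]:
    """Motion generator: posture inference followed by the joint-angle
    set that would be sent to the animated human body model."""
    return dict(POSTURE_ANGLES[infer_posture(bbox_height_ratio)])

print(generate_joint_angles(0.3))  # low bounding box -> lying posture
```

Because the generator works from inferred postures rather than tracked marker positions, a frame with noisy or partial image data can still map to a valid joint configuration, which is the robustness property the abstract claims over direct motion capture.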