The ability to predict human motion is crucial in several contexts, such as human tracking in computer vision and the synthesis of human-like motion in computer graphics. Previous work has focused on off-line processing of well-segmented data; however, many applications, such as robotics, require real-time control with efficient computation. In this paper, we propose a novel approach, called real-time stylistic prediction, for whole-body human motions that satisfies these requirements. The approach uses a novel generative model to represent whole-body human motions, including rhythmic motions (e.g., walking) and discrete motions (e.g., jumping). The generative model is composed of low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of human motion styles. A real-time adaptation algorithm is derived to estimate both the state variables and the style parameter of the model from non-stationary, unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, both adaptation and prediction take less than 15 ms per observation. Our real-time stylistic prediction was evaluated on human walking, running, and jumping behaviors.
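The structure described above, low-dimensional phase dynamics driving a two-factor (style x phase) observation model, with online adaptation of both state and style, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the Fourier motion bases, the constant phase velocity, the gradient-based style update, and all function names here are assumptions made for the example.

```python
import numpy as np

def make_bases(n_styles, n_dofs, n_harmonics=3, seed=0):
    """Random Fourier motion bases, one set per style.

    Stand-ins for bases that would be learned from motion-capture data.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_styles, n_dofs, 2 * n_harmonics))

def phase_features(phase, n_harmonics):
    """Periodic features of the 1-D phase variable."""
    k = np.arange(1, n_harmonics + 1)
    return np.concatenate([np.sin(k * phase), np.cos(k * phase)])

def observe(phase, style_w, bases):
    """Two-factor observation: style weights blend phase-dependent bases."""
    feats = phase_features(phase, bases.shape[2] // 2)
    return np.einsum('s,sdf,f->d', style_w, bases, feats)

def adapt(y, phase, style_w, bases, lr=0.05, omega=0.1):
    """One online step: advance the phase, then nudge the style weights
    toward the new observation by a gradient step on the squared error."""
    phase = (phase + omega) % (2 * np.pi)
    feats = phase_features(phase, bases.shape[2] // 2)
    per_style = np.einsum('sdf,f->sd', bases, feats)  # per-style predictions
    err = y - per_style.T @ style_w                   # observation residual
    style_w = style_w + lr * per_style @ err          # gradient step on style
    return phase, style_w

def predict(phase, style_w, bases, horizon=10, omega=0.1):
    """Roll the phase dynamics forward to predict a future motion sequence."""
    return np.stack([
        observe((phase + omega * (t + 1)) % (2 * np.pi), style_w, bases)
        for t in range(horizon)
    ])
```

Because each adaptation step is a fixed number of small matrix operations, its cost is constant per observation, which is the property that makes a real-time budget (here, under 15 ms per observation) attainable.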