Real-time stylistic prediction for whole-body human motions

  • Authors:
  • Takamitsu Matsubara; Sang-Ho Hyon; Jun Morimoto

  • Affiliations:
  • Takamitsu Matsubara: Graduate School of Information Science, NAIST, 8916-5, Takayama-cho, Ikoma, Nara, 630-0101, Japan and Department of Brain Robot Interface, ATR-CNS, 2-2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, ...
  • Sang-Ho Hyon: Department of Brain Robot Interface, ATR-CNS, 2-2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan and Department of Robotics, Ritsumeikan University, 1-1-1, Nojihigashi, Kusatsu, Shiga ...
  • Jun Morimoto: Department of Brain Robot Interface, ATR-CNS, 2-2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan

  • Venue:
  • Neural Networks
  • Year:
  • 2012


Abstract

The ability to predict human motion is crucial in contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processing of well-segmented data; however, many applications, such as robotics, require real-time control with efficient computation. In this paper, we propose a novel approach, real-time stylistic prediction for whole-body human motions, to satisfy these requirements. This approach uses a novel generative model to represent whole-body human motions, including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of human motion styles. A real-time adaptation algorithm was derived to estimate both the state variables and the style parameter of the model from non-stationary, unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, both adaptation and prediction take less than 15 ms per observation. Our real-time stylistic prediction was evaluated on human walking, running, and jumping behaviors.
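To make the model structure described above concrete, the following is a minimal sketch of a phase-plus-style generative model with an online adaptation step and forward prediction. The class name, the periodic RBF features, the gradient-based updates for phase and style, and the NaN-masking of partial observations are all illustrative assumptions inferred from the abstract, not the authors' exact algorithm.

```python
import numpy as np

class PhaseStyleModel:
    """Hedged sketch of a two-factor generative model:
    low-dimensional phase dynamics plus a style-weighted
    observation model (assumed bilinear in style and phase basis)."""

    def __init__(self, W, omega, dt=0.01):
        # W: (S, D, K) weight tensor: S style factors, D observation
        # dimensions, K radial basis functions over the phase.
        self.W = W
        self.omega = omega            # nominal phase velocity (rad/s)
        self.dt = dt
        self.centers = np.linspace(0.0, 2 * np.pi, W.shape[2], endpoint=False)
        self.width = 2 * np.pi / W.shape[2]

    def basis(self, phi):
        # Periodic RBF features of the phase (angle differences wrapped).
        d = np.angle(np.exp(1j * (phi - self.centers)))
        return np.exp(-0.5 * (d / self.width) ** 2)

    def observe(self, phi, s):
        # Two-factor observation: style-weighted mixture of basis responses.
        return np.einsum('s,sdk,k->d', s, self.W, self.basis(phi))

    def adapt_step(self, y, phi, s, lr_phi=0.5, lr_s=0.1, eps=1e-4):
        # One online update from a (possibly partial) observation y;
        # missing dimensions are marked NaN and simply masked out.
        mask = ~np.isnan(y)
        err = np.zeros_like(y)
        err[mask] = y[mask] - self.observe(phi, s)[mask]
        # Numerical phase gradient via central finite difference.
        de = self.observe(phi + eps, s) - self.observe(phi - eps, s)
        g_phi = err[mask] @ (de[mask] / (2 * eps))
        phi = (phi + self.omega * self.dt + lr_phi * g_phi) % (2 * np.pi)
        # Style gradient: error projected onto per-factor predictions
        # (the model is linear in the style parameter).
        per_factor = np.einsum('sdk,k->sd', self.W, self.basis(phi))
        s = s + lr_s * per_factor[:, mask] @ err[mask]
        return phi, s

    def predict(self, phi, s, horizon):
        # Roll the phase dynamics forward and generate future observations.
        out = []
        for _ in range(horizon):
            phi = (phi + self.omega * self.dt) % (2 * np.pi)
            out.append(self.observe(phi, s))
        return np.array(out)
```

Given a trained weight tensor, an online loop would call adapt_step once per incoming frame and predict whenever a future sequence is needed; staying within a per-observation budget like the reported 15 ms would then depend mainly on the number of basis functions and observation dimensions.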