Motion texture: a two-level statistical model for character motion synthesis

  • Authors:
  • Yan Li; Tianshu Wang; Heung-Yeung Shum

  • Affiliations:
  • Microsoft Research Asia, 3F Beijing Sigma Center, Haidian District, Beijing 100080, P.R. China; Xi'an Jiaotong University, P.R. China; Microsoft Research Asia, 3F Beijing Sigma Center, Haidian District, Beijing 100080, P.R. China

  • Venue:
  • Proceedings of the 29th annual conference on Computer graphics and interactive techniques
  • Year:
  • 2002


Abstract

In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion-capture data. We define a motion texture as a set of motion textons and their distribution, which together characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS), while the texton distribution is represented by a transition matrix indicating how likely each texton is to switch to another. We have designed a maximum-likelihood algorithm to learn the motion textons and their relationships from captured dance motion. The learned motion texture can then be used to generate new animations automatically and/or to edit animation sequences interactively. Most interestingly, a motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. We demonstrate our approach with many synthesized sequences of visually compelling dance motion.
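The two-level model described in the abstract can be sketched in code: sample a sequence of texton labels from the transition matrix, then roll each texton's linear dynamic system forward to produce pose frames. The sketch below is illustrative only and is not the paper's implementation; the texton parameters, pose dimension, and segment lengths are invented for the example, and the LDS is reduced to the simplest form x_{t+1} = A x_t + noise.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4  # pose vector dimension (e.g., joint angles); illustrative only

# Each (hypothetical) texton is a linear dynamic system x_{t+1} = A x_t + noise,
# with A stable (spectral radius < 1) so trajectories stay bounded, plus a
# fixed segment length for this sketch.
textons = [
    {"A": 0.9 * np.eye(d), "len": 10},                                # texton 0
    {"A": 0.8 * np.eye(d) + 0.1 * np.roll(np.eye(d), 1, axis=1), "len": 8},  # texton 1
]

# Texton distribution: row-stochastic transition matrix, where P[i, j] is the
# probability of switching from texton i to texton j.
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])

def synthesize(n_switches, x0, start=0):
    """Sample a texton path from P, then roll out each texton's LDS."""
    frames, label = [x0], start
    for _ in range(n_switches):
        A, T = textons[label]["A"], textons[label]["len"]
        for _ in range(T):
            # One LDS step with small process noise.
            frames.append(A @ frames[-1] + 0.01 * rng.standard_normal(d))
        label = rng.choice(len(textons), p=P[label])  # switch to next texton
    return np.array(frames)

motion = synthesize(n_switches=5, x0=np.ones(d))
print(motion.shape)  # (1 + total frames across the 5 sampled textons, d)
```

Editing at the texton level would correspond to changing an individual A (the fine motion details), while editing at the distribution level would correspond to changing P (the choreography), matching the two manipulation levels the abstract describes.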