Intuitive Interactive Human-Character Posing with Millions of Example Poses

  • Authors:
  • Xiaolin Wei; Jinxiang Chai

  • Affiliations:
  • Texas A&M University; Texas A&M University

  • Venue:
  • IEEE Computer Graphics and Applications
  • Year:
  • 2011

Abstract

The authors present a data-driven algorithm for interactive 3D human-character posing. They formulate the problem in a maximum a posteriori (MAP) framework that combines the user's inputs with priors embedded in prerecorded human poses; maximizing the posterior probability yields the most likely human pose that satisfies the user constraints. The system learns its priors from a large, heterogeneous human-motion-capture database (2.8 million prerecorded poses) and uses them to generate a wide range of natural poses, a capability no previous data-driven character-posing system has demonstrated. In addition, the authors present two intuitive interfaces for interactive human-character posing: direct-manipulation and sketching interfaces. They show their system's superiority over standard inverse-kinematics techniques and alternative data-driven techniques.
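
As a rough illustration of the MAP formulation described in the abstract (the notation below is ours, not the authors'): with q denoting the character pose and c the user-specified constraints, the posed character can be obtained as

$$ q^{*} \;=\; \arg\max_{q} \; p(q \mid c) \;=\; \arg\max_{q} \; p(c \mid q)\, p(q), $$

or, equivalently, by minimizing the negative log-posterior,

$$ q^{*} \;=\; \arg\min_{q} \; \bigl[ -\ln p(c \mid q) \;-\; \ln p(q) \bigr], $$

where the likelihood term p(c | q) measures how well a candidate pose satisfies the user's inputs (e.g., dragged handles or sketched strokes) and the prior term p(q) is learned from the prerecorded pose database. The specific constraint models and prior representation are the authors' and are not reproduced here.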