Mapping optical motion capture data to skeletal motion using a physical model

  • Authors:
  • Victor Brian Zordan; Nicholas C. Van Der Horst

  • Affiliations:
  • University of California, Riverside; University of California, Riverside

  • Venue:
  • Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation
  • Year:
  • 2003


Abstract

Motion capture has become a premier technique for animating humanlike characters. To facilitate its use, researchers have focused on manipulating data for retargeting, editing, combining, and reusing motion capture libraries. Many of these efforts take joint angles plus root trajectories as input, although this format requires an inherent mapping from the raw data recorded by many popular motion capture setups. In this paper, we propose a novel solution to the problem of mapping the 3D marker positions recorded by optical motion capture systems to joint trajectories for a fixed limb-length skeleton, using a forward dynamic model. To accomplish the mapping, we attach virtual springs between the marker positions and the corresponding landmarks of a physical simulation, and apply resistive torques to the skeleton's joints using a simple controller. For each motion capture sample, a joint-angle posture is resolved from the simulation's equilibrium state, based on the internal torques and external forces. Additional constraints, such as foot plants and hand holds, may also be treated as additional forces applied to the system and are a trivial and natural extension of the proposed technique. We present results for our approach as applied to several motion-captured behaviors.
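The core idea above — pull each simulated landmark toward its recorded marker with a virtual spring, damp the joints, and read the joint angles off the equilibrium state — can be sketched in miniature. The example below (an illustrative reduction, not the paper's implementation) maps a single optical marker to one joint angle of a unit-mass rod: the spring force on the rod's endpoint is converted to a joint torque via the endpoint Jacobian, joint damping dissipates energy, and semi-implicit Euler integration settles the system to equilibrium. All parameter names and gains are assumptions chosen for the sketch.

```python
import numpy as np

def settle_joint_angle(marker, link_length=1.0, k=200.0, damping=5.0,
                       dt=0.001, steps=20000):
    """Recover one joint angle from one 2D marker by forward dynamics.

    A virtual spring pulls the link's endpoint (the landmark) toward the
    recorded marker while a resistive joint torque damps the motion; the
    equilibrium angle is the mapped joint value. Gains and integrator
    settings are illustrative, not taken from the paper.
    """
    theta, omega = 0.0, 0.0               # joint angle, angular velocity
    inertia = link_length ** 2 / 3.0      # unit-mass rod about its pivot
    for _ in range(steps):
        # landmark position on the simulated limb
        tip = link_length * np.array([np.cos(theta), np.sin(theta)])
        # virtual spring force toward the captured marker position
        force = k * (marker - tip)
        # map the Cartesian force to a joint torque via the tip Jacobian,
        # and add a resistive (damping) torque at the joint
        jac = link_length * np.array([-np.sin(theta), np.cos(theta)])
        torque = jac @ force - damping * omega
        # semi-implicit Euler step toward equilibrium
        omega += (torque / inertia) * dt
        theta += omega * dt
    return theta

# a marker one link-length from the pivot, at roughly 45 degrees:
angle = settle_joint_angle(np.array([np.cos(0.785), np.sin(0.785)]))
```

In the full method this generalizes to many markers and a branching skeleton: each marker contributes a spring force at its landmark, the controller supplies damping at every joint, and each capture sample's posture is the simulation's settled state.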