Video-based reconstruction of animatable human characters

  • Authors and affiliations:
  • Carsten Stoll (MPI Informatik); Juergen Gall (ETH Zurich); Edilson de Aguiar (Disney Research); Sebastian Thrun (Stanford University); Christian Theobalt (MPI Informatik)

  • Venue:
  • ACM SIGGRAPH Asia 2010 papers
  • Year:
  • 2010

Abstract

We present a new performance capture approach that incorporates a physically based cloth model to reconstruct a rigged, fully animatable virtual double of a real person in loose apparel from multi-view video recordings. Our algorithm requires only a minimum of manual interaction. Without the use of optical markers in the scene, it first reconstructs the skeleton motion and detailed time-varying surface geometry of a real person from a reference video sequence. These captured reference performance data are then analyzed to automatically identify non-rigidly deforming pieces of apparel on the animated geometry. For each piece of apparel, the parameters of a physically based real-time cloth simulation model are estimated, and the surface geometry of occluded body regions is approximated. The reconstructed character model comprises a skeleton-based representation for the actual body parts and a physically based simulation model for the apparel. In contrast to previous performance capture methods, we can now also create new real-time animations of actors captured in general apparel.
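To make the apparel-identification step more concrete, the sketch below shows one plausible way to separate cloth from body vertices in a captured sequence; it is not the authors' implementation, and the function names, the dominant-bone skinning residual heuristic, and the threshold value are all illustrative assumptions. The idea is that a vertex rigidly attached to the body stays nearly fixed when expressed in the local frame of its dominant bone, whereas a vertex on loose apparel drifts over the reference sequence.

```python
import numpy as np

def classify_apparel_vertices(vertex_tracks, bone_transforms, bone_ids, threshold=0.02):
    """Label vertices whose motion is poorly explained by the skeleton as apparel.

    vertex_tracks   : (F, V, 3) world-space vertex positions over F frames
    bone_transforms : (F, B, 4, 4) world transform of each bone per frame
    bone_ids        : (V,) index of the dominant bone for each vertex
    threshold       : residual std-dev (scene units) above which a vertex is
                      treated as non-rigidly deforming (illustrative value)
    """
    F, V, _ = vertex_tracks.shape
    local_positions = np.zeros((F, V, 3))
    for f in range(F):
        for v in range(V):
            T = bone_transforms[f, bone_ids[v]]
            # Express the world-space vertex in its dominant bone's local frame;
            # a rigidly attached vertex stays (nearly) constant in this frame.
            p = np.append(vertex_tracks[f, v], 1.0)
            local_positions[f, v] = (np.linalg.inv(T) @ p)[:3]

    # Residual deformation: how much the bone-local position varies over time.
    residual = np.linalg.norm(local_positions.std(axis=0), axis=-1)  # shape (V,)
    return residual > threshold  # boolean mask of candidate apparel vertices
```

Vertices flagged by such a heuristic could then be grouped into connected apparel pieces, for which the cloth simulation parameters (e.g. stiffness and damping) are fitted so that the simulated cloth best matches the captured geometry; the thresholding shown here is only a crude stand-in for that analysis.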