Animated statues

  • Authors:
  • Jonathan Starck; Gordon Collins; Raymond Smith; Adrian Hilton; John Illingworth

  • Affiliations:
  • Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK (all authors)

  • Venue:
  • Machine Vision and Applications - Special issue: Human modeling, analysis, and synthesis
  • Year:
  • 2003


Abstract

In this paper we present a layered framework for the animation of high-resolution human geometry captured using active 3D sensing technology. Commercial scanning systems can now acquire highly accurate surface data across the whole body. However, the result is a dense, irregular surface mesh without any structure for animation. We introduce a model-based approach to animating a scanned dataset by matching a generic humanoid control model to the surface data. A set of manually defined feature points is used to define body and facial pose, and a novel shape-constrained matching algorithm is presented to deform the control model to match the scanned shape. This model-based approach allows the detailed specification of surface animation to be defined once for the generic model and re-applied to any captured scan. The detail of the high-resolution geometry is represented as a displacement map on the surface of the control model, providing smooth reconstruction of detailed shape on the animated control surface. The generic model provides animation control over the scanned data, and the displacement map provides control of the high-resolution surface for editing geometry or for level of detail in reconstruction or compression.
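The displacement-map representation described in the abstract can be illustrated with a minimal sketch: each vertex of the fitted control model stores a signed offset along its normal toward the dense scan, and that offset is re-applied after the control surface is animated. The function names and the use of a nearest-neighbour query (rather than the ray-based sampling a full implementation would use) are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def displacement_map(control_verts, control_normals, scan_verts):
    """Approximate per-vertex displacements of a control surface toward a scan.

    control_verts   : (N, 3) vertices of the fitted control model
    control_normals : (N, 3) unit vertex normals of the control model
    scan_verts      : (M, 3) vertices of the dense scanned mesh

    Returns a length-N array of signed offsets along the control normals.
    NOTE: a full implementation would sample the scan by casting rays along
    the normals; the nearest scan vertex is used here as a cheap stand-in.
    """
    tree = cKDTree(scan_verts)
    _, idx = tree.query(control_verts)          # nearest scan point per control vertex
    offsets = scan_verts[idx] - control_verts   # vector from control surface to scan
    # Signed distance along the control-model normal
    return np.einsum('ij,ij->i', offsets, control_normals)

def reconstruct(control_verts, control_normals, displacements):
    """Re-apply the stored displacements to an animated/deformed control surface."""
    return control_verts + displacements[:, None] * control_normals
```

Because the displacements are stored relative to the control surface, the same scalar map can be reused after any skeletal or facial deformation of the control model, which is what gives the layered framework its separation between animation control and high-resolution detail.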