Controllable hand deformation from sparse examples with rich details

  • Authors:
  • Haoda Huang, Ling Zhao, KangKang Yin, Yue Qi, Yizhou Yu, Xin Tong

  • Affiliations:
  • Microsoft Research Asia; Beihang University; National University of Singapore; Beihang University; University of Illinois at Urbana-Champaign; Microsoft Research Asia

  • Venue:
  • SCA '11: Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation
  • Year:
  • 2011


Abstract

Recent advances in laser scanning technology have made it possible to faithfully scan a real object with tiny geometric details, such as pores and wrinkles. However, a faithful digital model should not only capture the static details of its real counterpart but also reproduce deformed versions of those details. In this paper, we develop a data-driven model with two components that respectively accommodate smooth large-scale deformations and high-resolution deformable details. Large-scale deformations are driven by a nonlinear mapping between sparse control points and bone transformations. For highly deformable models with a large range of motion, however, a single global mapping fails to synthesize realistic geometries from sparse examples; the key is to train a collection of mappings defined over regions that are local in both the geometry and the pose space. Deformable fine-scale details are generated by a second nonlinear mapping between the control points and per-vertex displacements. We apply our modeling scheme to scanned human hand models. Experiments show that our deformation models, learned from extremely sparse training data, are effective and robust in synthesizing highly deformable models with rich fine features, for both keyframe animation and performance-driven animation. We also compare our results with those obtained by alternative techniques.
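
As a rough illustration of the two-component structure the abstract describes, the hedged sketch below pairs one nonlinear regression from sparse control points to bone transformations (large-scale pose) with a second one to per-vertex displacements (fine-scale detail). The Gaussian-RBF regressor, the variable names, and the random stand-in data are all assumptions for illustration; the paper's actual mapping model and its collection of local mappings over geometry and pose-space regions are not reproduced here.

```python
import numpy as np

# Hypothetical sketch of the two-component model in the abstract:
# (1) sparse control points -> bone transformations (large-scale deformation);
# (2) sparse control points -> per-vertex displacements (fine-scale details).
# A Gaussian RBF stands in for the paper's (unspecified here) nonlinear mapping.

def fit_rbf(X, Y, sigma=1.0, reg=1e-6):
    """Fit RBF weights W so that K(X, X) @ W approximates Y."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.solve(K + reg * np.eye(len(X)), Y)

def eval_rbf(Xq, X, W, sigma=1.0):
    """Evaluate the fitted RBF regressor at query configurations Xq."""
    d2 = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ W

# Stand-in training examples: P scanned poses, each pairing a sparse
# control-point configuration with bone transforms and detail displacements.
P, C, B, V = 8, 5, 3, 100              # examples, control points, bones, vertices
rng = np.random.default_rng(0)
ctrl = rng.normal(size=(P, C * 3))     # control-point positions, flattened
bones = rng.normal(size=(P, B * 12))   # 3x4 bone transforms, flattened
disp = rng.normal(size=(P, V * 3))     # per-vertex detail displacements

W_bones = fit_rbf(ctrl, bones)         # mapping 1: large-scale deformation
W_disp = fit_rbf(ctrl, disp)           # mapping 2: fine-scale details

# At run time, one new sparse control-point configuration drives both mappings.
q = rng.normal(size=(1, C * 3))
bones_q = eval_rbf(q, ctrl, W_bones).reshape(B, 3, 4)  # pose the skeleton
disp_q = eval_rbf(q, ctrl, W_disp).reshape(V, 3)       # add detail per vertex
# Final surface = base mesh skinned with bones_q, plus disp_q displacements.
```

Per the abstract, a single global regressor like the one above is exactly what fails for highly deformable models; the paper instead trains a collection of such mappings, each over a region that is local in both the geometry and the pose space.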