Expression mimicking: from 2D monocular sequences to 3D animations

  • Authors:
  • Charlotte Ghys, Maxime Taron, Nikos Paragios, Nikos Komodakis, Bénédicte Bascle

  • Affiliations:
  • MAS, Ecole Centrale Paris, Chatenay-Malabry, France (Ghys, Taron, Paragios, Komodakis); Orange-France Telecom R&D, Lannion, France (Ghys, Bascle)

  • Venue:
  • ISVC '07: Proceedings of the 3rd International Conference on Advances in Visual Computing, Part II
  • Year:
  • 2007


Abstract

In this paper we present a novel approach for mimicking expressions in 3D from a monocular video sequence. To this end, we first construct a high resolution semantic mesh model through automatic global and local registration of low resolution range data. The model is represented compactly by a predefined set of control points and animated using radial basis functions. To recover the 2D positions of the 3D control points in the observed sequence, we use a cascade AdaBoost-driven search whose search space is reduced through predictive expression modeling. The optimal configuration of the AdaBoost responses is determined using combinatorial linear programming, which enforces the anthropometric constraints of the model. The recovered displacements can then be reproduced on any version of the model registered to another face. Our method does not require dense stereo estimation and can therefore produce realistic animations using any 3D model. Promising experimental results demonstrate the potential of our approach.
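The animation step described above, propagating control-point displacements to the full mesh via radial basis functions, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper does not specify the kernel, so the Gaussian kernel, its width `sigma`, and the function name `rbf_deform` are all assumptions made here for illustration.

```python
import numpy as np

def rbf_deform(control_pts, displacements, vertices, sigma=0.1):
    """Propagate 3D control-point displacements to all mesh vertices
    using radial basis function interpolation (Gaussian kernel assumed)."""
    # Kernel matrix between control points: Phi[i, j] = phi(||p_i - p_j||)
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    Phi = np.exp(-(d / sigma) ** 2)
    # Solve Phi @ W = displacements: one weight vector per coordinate axis
    W = np.linalg.solve(Phi, displacements)              # shape (n_ctrl, 3)
    # Evaluate the interpolant at every mesh vertex and displace it
    dv = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ W

# Toy usage: deform a random high-resolution mesh from 5 tracked control points
rng = np.random.default_rng(0)
ctrl = rng.uniform(size=(5, 3))              # control-point rest positions
disp = 0.05 * rng.standard_normal((5, 3))    # tracked per-frame displacements
verts = rng.uniform(size=(1000, 3))          # full-resolution mesh vertices
deformed = rbf_deform(ctrl, disp, verts)
```

In a pipeline like the one described, the weights would be refit once per frame from the tracked control-point displacements; a thin-plate spline kernel with an affine term is a common alternative to the Gaussian assumed here.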