Realistic Animation Using Extended Adaptive Mesh for Model Based Coding

  • Authors:
  • Lijun Yin; Anup Basu

  • Venue:
  • EMMCVPR '99 Proceedings of the Second International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition
  • Year:
  • 1999

Abstract

Accurate localization and tracking of facial features are crucial for developing high-quality model-based coding (MPEG-4) systems. For teleconferencing applications at very low bit rates, eye and lip movements must be tracked accurately over time. These movements can be coded and transmitted to a remote site, where animation techniques are used to synthesize the facial movements on a model of a face. In this paper we describe the integration of simple heuristics, which are effective in improving the results of well-known facial feature detectors, with robust techniques for adapting a dynamic mesh for animation. A new method of generating a self-adaptive mesh using an extended dynamic mesh (EDM) is proposed to overcome the convergence problem of the dynamic-motion-equation method (DMM). The new method, a two-step mesh adaptation (called coarse-to-fine adaptation), enhances the stability of the DMM and improves the performance of the adaptive process. The accuracy of the proposed approach is demonstrated by experiments on eye-model animation. In this paper we focus our discussion only on the detection, tracking, modeling, and animation of eye movements.
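The abstract does not give the paper's actual EDM/DMM formulation, but the general idea of a dynamic-motion-equation mesh adaptation with a coarse-to-fine schedule can be sketched as follows. This is a minimal illustration, not the authors' method: a single mesh node is pulled toward a detected feature point by a damped spring equation m·a + c·v + k·(x − target) = 0, integrated with explicit Euler; the function names, parameter values, and two-pass schedule are all assumptions for illustration.

```python
def adapt_node(x, v, target, mass=1.0, damping=2.0, stiffness=4.0,
               dt=0.05, steps=200):
    """Relax one mesh-node coordinate x (with velocity v) toward a
    detected feature point using a damped dynamic motion equation.
    Hypothetical parameters, not taken from the paper."""
    for _ in range(steps):
        # Spring force toward the target plus viscous damping.
        force = -stiffness * (x - target) - damping * v
        v += (force / mass) * dt   # acceleration step (explicit Euler)
        x += v * dt                # position step
    return x, v


def coarse_to_fine(x, target):
    """Two-step adaptation in the spirit of the abstract's
    coarse-to-fine scheme: a heavily damped coarse pass approaches the
    target stably, then a stiffer fine pass settles it precisely."""
    x, v = adapt_node(x, 0.0, target, damping=4.0, stiffness=1.0)  # coarse
    x, v = adapt_node(x, v, target, damping=2.0, stiffness=4.0)    # fine
    return x
```

With these (assumed) coefficients both passes are numerically stable under explicit Euler, and the coarse pass removes most of the initial displacement before the stiffer fine pass takes over; in a real system one node per mesh vertex would be updated this way, coupled to its neighbors.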