MPEG-4 facial expression synthesis

  • Authors:
  • L. Malatesta, A. Raouzaiou, K. Karpouzis, S. Kollias

  • Affiliations:
  • Image, Video and Multimedia Systems Laboratory, National Technical University of Athens, Zografou, Greece 15780

  • Venue:
  • Personal and Ubiquitous Computing
  • Year:
  • 2009

Abstract

This work describes an approach to synthesizing facial expressions, including intermediate ones, using the tools provided by the MPEG-4 standard. The approach is based on real measurements and on universally accepted assumptions about their meaning, taking into account the results of Whissell's study. Additionally, MPEG-4 facial animation parameters (FAPs) are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined to model the effects of appraisal checks on facial expressions, and the temporal evolution of facial expressions is investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), making their interaction with humans, or with other ECAs, more affective.
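
The abstract does not include code, but the core idea of FAP-based synthesis of intermediate expressions can be illustrated with a minimal sketch. In the snippet below, the FAP names and values are hypothetical placeholders rather than the measured profiles from the paper, and simple linear interpolation stands in for whatever blending scheme the authors actually employ; it is meant only to show how intermediate expressions could be derived from archetypal FAP profiles.

```python
# Minimal sketch of FAP-based intermediate expression synthesis.
# FAP names and values are illustrative placeholders, NOT the measured
# profiles reported in the paper.

# Archetypal expression profiles: FAP name -> displacement (in FAPUs).
ANGER = {
    "squeeze_l_eyebrow": 120.0,
    "squeeze_r_eyebrow": 120.0,
    "lower_t_midlip": -40.0,
}
FEAR = {
    "raise_l_i_eyebrow": 150.0,
    "raise_r_i_eyebrow": 150.0,
    "open_jaw": 80.0,
}


def interpolate_profiles(profile_a, profile_b, t):
    """Linearly blend two FAP profiles; t=0 gives A, t=1 gives B.

    Intermediate values of t yield an intermediate expression, e.g. a
    blend lying between anger and fear on the activation-evaluation
    plane used in Whissell's study.
    """
    faps = set(profile_a) | set(profile_b)
    return {
        fap: (1.0 - t) * profile_a.get(fap, 0.0) + t * profile_b.get(fap, 0.0)
        for fap in faps
    }


def scale_intensity(profile, intensity):
    """Scale a profile toward (intensity < 1) or away from (> 1) the
    neutral face, which corresponds to all-zero FAP displacements."""
    return {fap: intensity * value for fap, value in profile.items()}


if __name__ == "__main__":
    # A hypothetical intermediate expression halfway between anger
    # and fear, rendered at 70% intensity.
    blended = scale_intensity(interpolate_profiles(ANGER, FEAR, 0.5), 0.7)
    for fap, value in sorted(blended.items()):
        print(f"{fap:22s} {value:8.1f}")
```

In an actual MPEG-4 pipeline, a FAP vector like the one printed above would be encoded per frame and would drive a compliant facial animation player; the temporal evolution studied in the paper would then correspond, roughly, to varying the blend and intensity parameters over time.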