Synthesizing expressions using facial feature point tracking: how emotion is conveyed

  • Authors:
  • Tadas Baltrušaitis; Laurel D. Riek; Peter Robinson

  • Affiliations:
  • University of Cambridge, Cambridge, United Kingdom (all authors)

  • Venue:
  • Proceedings of the 3rd international workshop on Affective interaction in natural environments
  • Year:
  • 2010

Abstract

Many approaches to the analysis and synthesis of facial expressions rely on automatically tracking landmark points on human faces. However, this approach is usually chosen for its ease of tracking rather than its ability to convey affect. We conducted an experiment that evaluated the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task. The experiment compared mental state recognition rates of participants who viewed videos of human actors and synthetic characters (a physical android robot, a virtual avatar, and virtual stick figure drawings) enacting various facial expressions. All expressions made by the synthetic characters were generated automatically from the 22 facial feature points tracked in the videos of the human actors. Our results show no difference in accuracy across the three synthetic representations; however, all three were less accurate than the original human actor videos that generated them. Overall, facial expressions showing surprise were more easily identifiable than other mental states, suggesting that a geometric approach to synthesis may be better suited to some mental states than others.
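The synthesis pipeline the abstract describes drives a synthetic character directly from tracked feature points. A minimal sketch of this idea, assuming a simple displacement-retargeting scheme (the function name, point layout, and scaling heuristic are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch: retarget an actor's tracked facial feature points
# onto a synthetic character by transferring per-point displacements from
# the actor's neutral face, scaled to the character's face size.
# The 22-point layout and this scaling heuristic are assumptions.

def retarget(actor_neutral, actor_frame, char_neutral):
    """Map actor feature-point motion onto a character's geometry.

    Each argument is a list of (x, y) points (e.g. 22 tracked landmarks).
    Each actor point's displacement from its neutral position is scaled
    by the ratio of face sizes and added to the character's neutral point.
    """
    def face_size(points):
        # Crude scale estimate: diagonal of the points' bounding box.
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return ((max(xs) - min(xs)) ** 2 + (max(ys) - min(ys)) ** 2) ** 0.5

    scale = face_size(char_neutral) / face_size(actor_neutral)
    return [
        (cx + (ax - nx) * scale, cy + (ay - ny) * scale)
        for (nx, ny), (ax, ay), (cx, cy)
        in zip(actor_neutral, actor_frame, char_neutral)
    ]
```

Running this per video frame would animate any representation (avatar, robot, or stick figure) whose control points correspond one-to-one with the tracked landmarks, which matches the abstract's observation that the same 22 points drove all three synthetic characters.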