Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions

  • Authors:
  • Douglas W. Cunningham, Mario Kleiner, Christian Wallraven, Heinrich H. Bülthoff

  • Affiliations:
  • Max Planck Institute for Biological Cybernetics, Tübingen, Germany (all authors)

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2005


Abstract

Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine which facial areas are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques not only provides fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.