Perceiving visual emotions with speech

  • Authors:
  • Zhigang Deng; Jeremy Bailenson; J. P. Lewis; Ulrich Neumann

  • Affiliations:
  • Department of Computer Science, University of Houston, Houston, TX; Department of Communication, Stanford University, Stanford, CA; Computer Graphics Lab, Stanford University, Stanford, CA; Department of Computer Science, University of Southern California, Los Angeles, CA

  • Venue:
  • IVA '06: Proceedings of the 6th International Conference on Intelligent Virtual Agents
  • Year:
  • 2006


Abstract

Embodied Conversational Agents (ECAs) with realistic faces are becoming an integral part of many graphics systems used in HCI applications. A fundamental question is how people visually perceive the affect of a speaking agent. In this paper we present the first study evaluating the relation between objective and subjective visual perception of emotion displayed on a speaking human face, using both full video and sparse point-rendered representations of the face. We found that an objective machine learning analysis of facial marker motion data correlates with the evaluations made by experimental subjects and, in particular, that the lower face region provides informative cues for visual emotion perception. We also found that affect is still conveyed by the abstract point-rendered representation.
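
The abstract only summarizes the analysis, but the core comparison it describes, an objective classifier applied to facial marker motion versus subjective ratings from viewers, can be illustrated with a short sketch. The Python example below is an assumption-laden illustration rather than the authors' actual method: the feature layout, the emotion set, the SVM classifier, and the placeholder subjective recognition rates are all hypothetical.

```python
# Minimal sketch (not the authors' pipeline): train an emotion classifier on
# facial marker motion features and correlate its per-emotion recognition rate
# with subjective recognition rates from a perceptual experiment.
# Data shapes, feature extraction, and the SVM choice are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["neutral", "happy", "angry", "sad"]

# Placeholder features: one row per spoken utterance, columns are motion
# statistics of markers in a given face region (lower or upper face).
n_utterances, n_features = 120, 60
X_lower = rng.normal(size=(n_utterances, n_features))
X_upper = rng.normal(size=(n_utterances, n_features))
y = rng.integers(0, len(emotions), size=n_utterances)  # ground-truth emotion labels

def per_emotion_accuracy(X, y, n_classes):
    """Cross-validated recognition rate of an SVM, broken down by emotion."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    pred = cross_val_predict(clf, X, y, cv=5)
    return np.array([np.mean(pred[y == c] == c) for c in range(n_classes)])

# Objective (machine) recognition rates per emotion for each face region.
acc_lower = per_emotion_accuracy(X_lower, y, len(emotions))
acc_upper = per_emotion_accuracy(X_upper, y, len(emotions))

# Subjective recognition rates per emotion (placeholder values; in the study
# these would come from subjects viewing full-video or point-rendered stimuli).
subjective = np.array([0.85, 0.70, 0.65, 0.55])

# Correlate objective and subjective rates for each face region.
for name, acc in [("lower face", acc_lower), ("upper face", acc_upper)]:
    r, p = pearsonr(acc, subjective)
    print(f"{name}: Pearson r = {r:.2f} (p = {p:.3f})")
```

With real marker data, a stronger correlation for the lower-face features than for the upper-face features would mirror the paper's finding that the lower face carries the more informative emotion cues during speech.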