Perceptually valid facial expression blending using expression units

  • Authors:
  • Ali Arya, Avi Parush, Alicia McMullan

  • Affiliations:
  • Carleton University (all authors)

  • Venue:
  • ACM SIGGRAPH 2007 posters
  • Year:
  • 2007


Abstract

The human face is a rich source of information about underlying emotional states. Facial expressions are crucial in conveying emotion as well as in improving the quality of communication and speech comprehension. The detailed study of the facial actions involved in expressing the six universal emotions [1] has helped the computer graphics community develop realistic facial animations. Yet the visual mechanisms by which these facial expressions are altered or combined to convey more subtle information remain less well understood by behavioural psychologists and animators. This lack of a strong theoretical basis for combining facial actions has resulted in the use of ad-hoc methods for blending facial expressions in animations [2--3]. These methods mainly treat the facial movements of transient or combined expressions as a simple mathematical function of the main expressions involved. The methods that have emerged are therefore computationally tractable, but the question of their "perceptual" and "psychological" validity has not yet been answered. Examples of such methods are "sum of two expressions with or without limits," "weighted averaging," and the "MAX operator."
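
The three ad-hoc blending schemes named in the abstract can be sketched as element-wise operations on expression parameter vectors (e.g. blend-shape or muscle activation weights in [0, 1]). The sketch below is an illustration under that assumption; the function names and the example activation values are hypothetical and do not come from the poster.

```python
import numpy as np

def blend_sum(a, b, limit=1.0):
    """Sum of two expression parameter vectors, optionally clamped to a limit."""
    return np.minimum(a + b, limit)

def blend_weighted_average(a, b, w=0.5):
    """Weighted average of two expression parameter vectors."""
    return w * a + (1.0 - w) * b

def blend_max(a, b):
    """MAX operator: keep the stronger activation for each parameter."""
    return np.maximum(a, b)

# Hypothetical activation vectors for two of the universal expressions,
# e.g. per-region deformation weights used by a facial animation rig.
happiness = np.array([0.8, 0.1, 0.0, 0.6])
surprise  = np.array([0.2, 0.9, 0.7, 0.3])

print(blend_sum(happiness, surprise))               # clamped sum
print(blend_weighted_average(happiness, surprise))  # 50/50 average
print(blend_max(happiness, surprise))               # element-wise max
```

All three operators are cheap to compute per frame, which is why they are attractive in practice; the poster's point is that their perceptual and psychological validity has not been established.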