Real-time classification of evoked emotions using facial feature tracking and physiological responses

  • Authors:
  • Jeremy N. Bailenson, Emmanuel D. Pontikakis, Iris B. Mauss, James J. Gross, Maria E. Jabon, Cendri A. C. Hutcherson, Clifford Nass, Oliver John

  • Affiliations:
  • Department of Communication, Stanford University, Stanford, CA 94305, USA
  • Department of Computer Science, Stanford University, Stanford, CA 94305, USA
  • Department of Psychology, 2155 South Race Street, University of Denver, Denver, CO 80208, USA
  • Department of Psychology, Stanford University, Stanford, CA 94305, USA
  • Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
  • Department of Psychology, Stanford University, Stanford, CA 94305, USA
  • Department of Communication, Stanford University, Stanford, CA 94305, USA
  • Department of Psychology, University of California, Berkeley, CA 94720, USA

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2008

Abstract

We present automated, real-time models, built with machine learning algorithms, that use video of subjects' faces in conjunction with physiological measurements to predict rated emotion (trained coders' second-by-second assessments of sadness or amusement). Input consisted of videotapes of 41 subjects watching emotionally evocative films, along with measures of their cardiovascular activity, somatic activity, and electrodermal responses. We built algorithms based on points extracted from the subjects' faces as well as on their physiological responses. Strengths of the current approach are that (1) we assess the real behavior of subjects watching emotional videos rather than actors making facial poses, (2) the training data allow us to predict both emotion type (amusement versus sadness) and the intensity level of each emotion, and (3) we provide a direct comparison among person-specific, gender-specific, and general models. Results demonstrated good fits for the models overall, with better performance for emotion categories than for emotion intensity, for amusement ratings than for sadness ratings, for a full model using both physiological measures and facial tracking than for either cue alone, and for person-specific models than for gender-specific or general models.
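To illustrate the kind of per-frame pipeline the abstract describes (concatenating facial-tracking features with physiological measures and training a classifier to separate amusement from sadness), here is a minimal sketch on synthetic data. The feature dimensions, the labels, and the logistic-regression learner are all assumptions made for illustration; they are not the paper's actual features or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-frame inputs: tracked facial points and
# physiological channels (e.g. cardiovascular, somatic, electrodermal).
n_frames = 200
facial = rng.normal(0.0, 1.0, (n_frames, 6))
physio = rng.normal(0.0, 1.0, (n_frames, 3))
X = np.hstack([facial, physio])  # "full model": both cue types together

# Synthetic labels (1 = amusement, 0 = sadness), loosely tied to the
# features so the toy problem is learnable.
true_w = rng.normal(0.0, 1.0, X.shape[1])
y = (X @ true_w + rng.normal(0.0, 0.5, n_frames) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / n_frames
    w -= 0.5 * grad

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

A person-specific model in this framing would simply fit `w` on one subject's frames, while a general model pools frames across subjects; predicting intensity rather than category would swap the logistic loss for a regression loss.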