Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder

  • Authors:
  • Rana el Kaliouby; Alea Teeters

  • Affiliations:
  • Massachusetts Institute of Technology, Cambridge, MA; Massachusetts Institute of Technology, Cambridge, MA

  • Venue:
  • Proceedings of the 9th international conference on Multimodal interfaces
  • Year:
  • 2007


Abstract

The emergence of novel affective technologies, such as wearable interventions for individuals who have difficulties with social-emotional communication, requires reliable, real-time processing of spontaneous expressions. This paper describes a novel wearable camera and a systematic methodology to elicit, capture and tag natural, yet experimentally controlled, face videos in dyadic conversations. The MIT-Groden-Autism corpus is the first corpus of naturally evoked facial expressions of individuals with and without Autism Spectrum Disorders (ASD), a growing population who have difficulties with social-emotional communication. It is also the largest such corpus in both the number and duration of its videos, and it represents affective-cognitive states that extend beyond the basic emotions. We highlight the machine vision challenges inherent in processing such a corpus, including pose changes and atypical affective displays.