Technology for just-in-time in-situ learning of facial affect for persons diagnosed with an autism spectrum disorder

  • Authors:
  • Miriam Madsen; Rana el Kaliouby; Matthew Goodwin; Rosalind Picard

  • Affiliations:
  • MIT Media Lab, Cambridge, MA, USA; MIT Media Lab, Cambridge, MA, USA; Groden Center, Providence, RI, USA; MIT Media Lab, Cambridge, MA, USA

  • Venue:
  • Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '08)
  • Year:
  • 2008


Abstract

Many first-hand accounts from individuals diagnosed with autism spectrum disorders (ASD) highlight the challenges inherent in processing high-speed, complex, and unpredictable social information, such as facial expressions, in real time. In this paper, we describe a new technology aimed at helping people capture, analyze, and reflect on the social-emotional signals communicated by facial and head movements during live interactions with their everyday social companions. The system combines new hardware, a miniature camera connected to an ultramobile PC, with custom software that tracks, captures, and interprets facial and head movements and presents the interpretations intuitively (e.g., indicating a high probability that the person looks "confused"). We present this technology together with results from a series of pilot studies in which adolescents diagnosed with ASD used it in their peer-group setting and contributed to its development through their feedback.
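
To make the interpretation-and-display loop concrete, below is a minimal Python sketch of the kind of frame-by-frame pipeline such a system might run: extract features from each camera frame, estimate a probability distribution over expression labels, smooth the estimates over a short window, and surface a label only when its probability is high. The label set, feature extractor, classifier, and thresholds here are all illustrative placeholders and assumptions, not the paper's actual implementation.

    import numpy as np

    # Illustrative mental-state labels of the kind the interface surfaces
    # (e.g., "confused"); this exact label set is an assumption.
    LABELS = ["agreeing", "concentrating", "confused", "interested", "thinking"]

    rng = np.random.default_rng(0)

    def extract_features(frame):
        """Placeholder for the facial/head-movement feature extraction the
        real system performs on each camera frame; returns dummy features."""
        return rng.normal(size=8)

    def classify(features, weights):
        """Toy linear model + softmax standing in for the system's actual
        expression-interpretation model, which is not specified here."""
        scores = weights @ features
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()

    def run(num_frames=30, window=10, threshold=0.35):
        weights = rng.normal(size=(len(LABELS), 8))  # placeholder model
        history = []
        for t in range(num_frames):
            probs = classify(extract_features(frame=None), weights)
            history.append(probs)
            # Smooth over a sliding window so the on-screen interpretation
            # does not flicker as per-frame estimates fluctuate.
            smoothed = np.mean(history[-window:], axis=0)
            top = int(np.argmax(smoothed))
            if smoothed[top] >= threshold:
                print(f"frame {t:02d}: high probability the person looks "
                      f"'{LABELS[top]}' (p={smoothed[top]:.2f})")

    if __name__ == "__main__":
        run()

Reporting a label only when its smoothed probability clears a threshold is one simple way to present hedged interpretations ("high probability the person looks confused") rather than hard classifications.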