Virtual reality-based facial expressions understanding for teenagers with autism

  • Authors:
  • Esubalew Bekele; Zhi Zheng; Amy Swanson; Julie Davidson; Zachary Warren; Nilanjan Sarkar

  • Affiliations:
  • Esubalew Bekele: Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN
  • Zhi Zheng: Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN
  • Amy Swanson: Treatment and Research in Autism Spectrum Disorder (TRIAD), Vanderbilt University, Nashville, TN
  • Julie Davidson: Pediatrics and Psychiatry Department and Treatment and Research in Autism Spectrum Disorder (TRIAD), Vanderbilt University, Nashville, TN
  • Zachary Warren: Pediatrics and Psychiatry Department and Treatment and Research in Autism Spectrum Disorder (TRIAD), Vanderbilt University, Nashville, TN
  • Nilanjan Sarkar: Mechanical Engineering Department and Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN

  • Venue:
  • UAHCI'13: Proceedings of the 7th International Conference on Universal Access in Human-Computer Interaction: User and Context Diversity - Volume 2
  • Year:
  • 2013

Abstract

Technology-enabled intervention has the potential to individualize and improve the outcomes of traditional intervention. In particular, virtual reality (VR) technology has been proposed for training core social and communication skills that are impaired in individuals with autism. Various studies have demonstrated that children with autism process emotional faces slowly and atypically, which may reflect atypical underlying neural structure. Emotional face recognition is considered one of the core building blocks of social communication, and early impairment in this skill has consequences for later, more complex language and communication skills. This work proposes a VR-based facial emotion recognition mechanism in the presence of contextual storytelling. Results from a usability study support the idea that individuals with autism may employ different facial processing strategies. These results are discussed in the context of applying multimodal processing to enable adaptive VR-based systems that deliver individualized intervention.