Combining emotion and facial nonmanual signals in synthesized American Sign Language

  • Authors:
  • Jerry C. Schnepp;Rosalee J. Wolfe;John C. McDonald;Jorge A. Toro

  • Affiliations:
DePaul University, Chicago, IL, USA;DePaul University, Chicago, IL, USA;DePaul University, Chicago, IL, USA;Worcester Polytechnic Institute, Worcester, MA, USA

  • Venue:
Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year:
  • 2012

Abstract

Translating from English to American Sign Language (ASL) requires an avatar that can display synthesized ASL. Nonmanual signals that appear on the face are essential to the language. Previous avatars were hampered by an inability to portray emotion and facial nonmanual signals that occur at the same time. A new animation system addresses this challenge. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. For each animation, participants were able to identify both the nonmanual signals and the emotional states. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes can move an avatar's brows in opposing directions.
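
The brow conflict mentioned in the abstract can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the authors' animation system: it assumes invented channel names and rig values, and shows how a naive linear blend of a question nonmanual (brows raised) with an angry affect (brows lowered) can largely cancel the grammatical marker, which is why keeping both channels recognizable is a nontrivial synthesis problem.

```python
# Hypothetical sketch (not the paper's system): why co-occurring question
# nonmanuals and affect can pull an avatar's brows in opposite directions,
# and how a naive blend of the two channels behaves. All names and values
# are invented for illustration only.

from dataclasses import dataclass


@dataclass
class BrowPose:
    """Vertical brow offset in arbitrary rig units (+ raises, - lowers)."""
    offset: float


# Invented channel targets: a yes/no question typically raises the brows,
# while an angry affect typically lowers them.
YES_NO_QUESTION = BrowPose(offset=+1.0)
ANGER_AFFECT = BrowPose(offset=-0.6)


def blend(nonmanual: BrowPose, affect: BrowPose, affect_weight: float = 0.5) -> BrowPose:
    """Naive linear blend of the two channels.

    A real synthesis system needs a more principled combination so that both
    the grammatical marker and the emotion remain identifiable; a plain
    weighted average can wash out the brow raise entirely.
    """
    combined = (1.0 - affect_weight) * nonmanual.offset + affect_weight * affect.offset
    return BrowPose(offset=combined)


if __name__ == "__main__":
    result = blend(YES_NO_QUESTION, ANGER_AFFECT, affect_weight=0.5)
    print(f"Question-only brows:   {YES_NO_QUESTION.offset:+.2f}")
    print(f"Anger-only brows:      {ANGER_AFFECT.offset:+.2f}")
    print(f"Naively blended brows: {result.offset:+.2f}  (raise largely cancelled)")
```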