Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers

  • Authors:
  • Matt Huenerfauth

  • Affiliations:
  • Computer Science, CUNY Queens College and CUNY Graduate Center, The City University of New York (CUNY), Flushing, NY 11367, USA

  • Venue:
  • UAHCI '09: Proceedings of the 5th International Conference on Universal Access in Human-Computer Interaction, Part III: Applications and Services
  • Year:
  • 2009

Abstract

Many deaf adults in the U.S. have difficulty reading written English text; computer animations of American Sign Language (ASL) can improve these individuals' access to information, communication, and services. Current ASL animation technology cannot automatically generate expressions in which the signer associates locations in space with entities under discussion, nor can it generate many ASL signs whose movements are modified based on these locations. To determine how important such phenomena are to user satisfaction and the comprehension of animations by deaf individuals, we conducted a study in which native ASL signers evaluated ASL animations with and without entity-representing spatial phenomena. We found that the inclusion of these expressions in the repertoire of ASL animation systems led to a significant improvement in user comprehension of the animations, thereby motivating future research on automatically generating such ASL spatial expressions.