Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation

  • Authors:
  • Pengfei Lu; Matt Huenerfauth

  • Affiliations:
  • The City University of New York, Graduate Center; The City University of New York, Queens College

  • Venue:
  • ACM Transactions on Accessible Computing (TACCESS)
  • Year:
  • 2011

Abstract

We are studying techniques for producing realistic and understandable animations of American Sign Language (ASL); such animations have accessibility benefits for signers with lower levels of written language literacy. This article describes and evaluates a novel method for modeling and synthesizing ASL animations based on samples of ASL signs collected from native signers. We apply this technique to ASL inflecting verbs, common signs in which the location and orientation of the hands are influenced by the arrangement of locations in 3D space that represent entities under discussion. We train mathematical models of hand movement on animation data of signs produced by a native signer. In evaluation studies with native ASL signers, the verb animations synthesized from our model received subjective-rating and comprehension-question scores similar to those of animations produced by a human animator, and higher scores than baseline animations. Further, we examine a split modeling technique for accommodating certain verb signs with complex movement patterns, and we analyze how robust our modeling techniques are to reductions in the size of their training data. The modeling techniques in this article are applicable to other types of ASL signs and to other sign languages used internationally. Our models’ parameterization of sign animations can increase the repertoire of generation systems and can partially automate the work of humans using sign language scripting systems.
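
To make the core idea concrete, the sketch below illustrates one simple way such a parameterized verb model could look: a least-squares mapping from the 3D locations of two discourse referents to a fixed set of hand-position keyframes. This is only a hedged illustration of the general approach described in the abstract; the function names, data layout, number of keyframes, and use of a plain linear model are all assumptions, not the authors' actual modeling technique.

```python
# Illustrative sketch only (not the paper's model): an affine least-squares
# map from referent locations to hand-position keyframes for an inflected verb.
# All names, shapes, and the keyframe count are hypothetical.
import numpy as np

N_KEYFRAMES = 5  # hypothetical number of keyframes per verb performance

def fit_verb_model(referent_pairs, keyframe_tracks):
    """Fit a least-squares model from referent locations to hand keyframes.

    referent_pairs  : (n_samples, 6) array, subject xyz followed by object xyz
    keyframe_tracks : (n_samples, N_KEYFRAMES * 3) array, flattened xyz hand
                      positions for each keyframe of each recorded performance
    Returns the weight matrix of an affine model.
    """
    X = np.hstack([referent_pairs, np.ones((referent_pairs.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, keyframe_tracks, rcond=None)
    return W

def synthesize_keyframes(W, subject_xyz, object_xyz):
    """Predict hand-position keyframes for a new spatial arrangement."""
    x = np.hstack([subject_xyz, object_xyz, [1.0]])
    return (x @ W).reshape(N_KEYFRAMES, 3)

# Toy usage with random stand-in data; real training data would come from
# recordings of signs produced by a native signer.
rng = np.random.default_rng(0)
refs = rng.uniform(-1, 1, size=(40, 6))
tracks = rng.uniform(-1, 1, size=(40, N_KEYFRAMES * 3))
W = fit_verb_model(refs, tracks)
print(synthesize_keyframes(W, [0.3, 0.1, 0.5], [-0.4, 0.0, 0.6]))
```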