Modeling and synthesizing spatially inflected verbs for American Sign Language animations

  • Authors:
  • Matt Huenerfauth; Pengfei Lu

  • Affiliations:
  • City University of New York, New York, NY, USA; City University of New York, New York, NY, USA

  • Venue:
  • Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year:
  • 2010

Abstract

Animations of American Sign Language (ASL) have accessibility benefits for many signers with lower levels of written-language literacy. This paper introduces a novel method for modeling and synthesizing ASL animations based on movement data collected from native signers. The technique synthesizes animations of signs, in particular inflecting verbs, which are frequent in ASL and whose performance depends on the arrangement of locations in 3D space that represent the entities under discussion. Mathematical models of hand movement are trained on examples of signs produced by a human animator. Animations of ASL synthesized from the model were judged to be of similar quality to animations produced by a human animator, and in an evaluation study with 18 native signers they led to higher comprehension scores than baseline approaches limited to selecting signs from a finite dictionary. The technique is applicable to ASL and other sign languages; it can significantly increase the repertoire of generation systems and can partially automate the work of humans using scripting systems.
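
The abstract does not specify the mathematical form of the movement models. As a purely illustrative sketch, not the authors' method, one simple formulation assumes the hand's position at each animation keyframe is an affine function of the 3D locations assigned to the verb's subject and object, fit by least squares on example performances; all names, dimensions, and data below are hypothetical.

    import numpy as np

    # Hypothetical training data: each example pairs the 3D locations assigned
    # to the verb's subject and object (6 input features) with the hand's
    # position at a fixed set of keyframes (5 keyframes x 3 coordinates).
    rng = np.random.default_rng(0)
    n_examples, n_keyframes = 40, 5
    X = rng.uniform(-1.0, 1.0, size=(n_examples, 6))   # subject/object points
    true_W = rng.normal(size=(7, n_keyframes * 3))     # stand-in "signer" mapping
    X_aug = np.hstack([X, np.ones((n_examples, 1))])   # append a bias column
    Y = X_aug @ true_W + rng.normal(scale=0.01, size=(n_examples, n_keyframes * 3))

    # Fit one least-squares model per output dimension (all at once via lstsq):
    # hand position at each keyframe as an affine function of entity locations.
    W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

    # Synthesize: for a novel arrangement of subject/object locations, predict
    # the keyframe hand positions of the spatially inflected verb performance.
    novel = np.array([[0.3, 0.1, 0.5, -0.4, 0.2, 0.6, 1.0]])  # includes bias term
    trajectory = (novel @ W).reshape(n_keyframes, 3)
    print(trajectory)

Under this kind of formulation, the fitted mapping can produce a performance for any novel arrangement of entity locations, which is what allows a parameterized model to go beyond selecting signs from a finite dictionary.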