Learning a vector-based model of American Sign Language inflecting verbs from motion-capture data

  • Authors:
  • Pengfei Lu; Matt Huenerfauth

  • Affiliations:
  • City University of New York (CUNY), New York, NY; City University of New York (CUNY), Flushing, NY

  • Venue:
  • SLPAT '12 Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies
  • Year:
  • 2012

Abstract

American Sign Language (ASL) synthesis software can improve the accessibility of information and services for deaf individuals with low English literacy. The synthesis component of current ASL animation generation and scripting systems has limited handling of the many ASL verb signs whose movement path is inflected to indicate 3D locations in the signing space associated with discourse referents. Using motion-capture data recorded from human signers, we model how the motion paths of verb signs vary based on the locations of their subject and object. This model yields a lexicon of ASL verb signs that is parameterized on the 3D locations of the verb's arguments; such a lexicon enables more realistic and understandable ASL animations. A new model presented in this paper, based on identifying the principal movement vector of the hands, shows improvement in modeling ASL verb signs, including when trained on movement data from a different human signer.
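The abstract describes identifying the principal movement vector of the hands from motion-capture data. The paper's actual model is not reproduced here, but one common way to extract a dominant movement direction from a 3D hand trajectory is to take the first principal component of the frame-to-frame displacements. The sketch below is an illustrative assumption of that general technique, not the authors' method; the function name and input format are hypothetical.

```python
import numpy as np

def principal_movement_vector(hand_positions):
    """Estimate the dominant direction of a hand's motion path.

    hand_positions: (T, 3) array of 3D hand positions over time,
    e.g. one marker's trajectory from motion capture.
    Returns a unit 3-vector: the leading singular vector of the
    frame-to-frame displacements, oriented from start toward end.
    """
    positions = np.asarray(hand_positions, dtype=float)
    displacements = np.diff(positions, axis=0)  # (T-1, 3) per-frame motion
    # SVD of the displacement matrix: the top right-singular vector is
    # the direction that captures the most movement energy.
    _, _, vt = np.linalg.svd(displacements, full_matrices=False)
    v = vt[0]
    # Orient the vector along the net start-to-end movement so that
    # the sign of the direction is consistent across trajectories.
    if np.dot(v, positions[-1] - positions[0]) < 0:
        v = -v
    return v / np.linalg.norm(v)
```

A lexicon entry parameterized on argument locations, as the abstract describes, could then relate such vectors (computed from many recorded performances of a verb) to the 3D subject and object positions in signing space.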