Modeling animations of American Sign Language verbs through motion-capture of native ASL signers

  • Authors:
  • Pengfei Lu

  • Affiliations:
  • The City University of New York (CUNY), New York, NY

  • Venue:
  • ACM SIGACCESS Accessibility and Computing
  • Year:
  • 2010

Abstract

Software to generate American Sign Language (ASL) automatically can benefit deaf people with low English literacy. However, modern computational linguistic software cannot produce important spatial aspects of ASL signs and verbs; better models of spatially complex signs are needed. Our goals are: to create a linguistic resource of ASL signs via motion-capture data collection; to model the movement paths of inflecting/indicating verbs using machine learning and computational techniques; and to produce grammatical, natural-looking, and understandable animations of ASL. Our methods include linguistic annotation of the data and evaluation by native ASL signers. This summary also describes our research progress.