Tessa, a system to aid communication with deaf people
Proceedings of the Fifth International ACM Conference on Assistive Technologies
A Machine Translation System from English to American Sign Language
AMTA '00 Proceedings of the 4th Conference of the Association for Machine Translation in the Americas on Envisioning Machine Translation in the Information Future
Automatic verb agreement in computer-synthesized depictions of American Sign Language
Generating American Sign Language classifier predicates for English-to-ASL machine translation
Universal Access in the Information Society
A knowledge-based sign synthesis architecture
Universal Access in the Information Society
Evaluation of American Sign Language Generation by Native ASL Signers
ACM Transactions on Accessible Computing (TACCESS)
Toward the Study of Sign Language Coarticulation: Methodology Proposal
ACHI '09 Proceedings of the 2009 Second International Conferences on Advances in Computer-Human Interactions
Modeling and synthesizing spatially inflected verbs for American Sign Language animations
Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility
Effect of spatial reference and verb inflection on the usability of sign language animations
Universal Access in the Information Society
Effect of presenting video as a baseline during an American Sign Language animation user study
Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility
Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation
ACM Transactions on Accessible Computing (TACCESS)
Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation
Computer Speech and Language
American Sign Language (ASL) synthesis software can improve the accessibility of information and services for deaf individuals with low English literacy. The synthesis components of current ASL animation generation and scripting systems have limited handling of the many ASL verb signs whose movement path is inflected to indicate the 3D locations in the signing space that are associated with discourse referents. Using motion-capture data recorded from human signers, we model how the motion paths of verb signs vary with the locations of their subject and object. This model yields a lexicon of ASL verb signs parameterized on the 3D locations of each verb's arguments; such a lexicon enables more realistic and understandable ASL animations. A new model presented in this paper, based on identifying the principal movement vector of the hands, improves on prior models of ASL verb signs, including when trained on movement data from a different human signer.
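The abstract does not specify how the principal movement vector is computed; one plausible, minimal sketch is to take the first principal component of the hand's 3D motion-capture positions (via SVD of the centered trajectory), oriented along the overall start-to-end displacement. The function name, the `(T, 3)` trajectory layout, and the example path below are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def principal_movement_vector(trajectory):
    """Estimate the dominant movement direction of a hand trajectory.

    trajectory: (T, 3) array of 3D hand positions (an assumed layout
    for motion-capture data). Returns a unit 3D vector: the first
    principal component of the centered positions, sign-flipped if
    needed so it points from the trajectory's start toward its end.
    """
    pts = np.asarray(trajectory, dtype=float)
    centered = pts - pts.mean(axis=0)
    # SVD of the centered point cloud: the top right-singular vector
    # is the axis of greatest positional variance, i.e. the principal
    # movement direction of the hand.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]
    # Orient the vector along the overall start-to-end displacement.
    if np.dot(v, pts[-1] - pts[0]) < 0:
        v = -v
    return v / np.linalg.norm(v)

# Hypothetical example: a noisy, mostly straight path moving toward a
# referent location in signing space.
t = np.linspace(0.0, 1.0, 50)
path = np.stack([t, 0.2 * t, 0.05 * np.sin(6 * t)], axis=1)
v = principal_movement_vector(path)
```

In a lexicon parameterized on the 3D locations of a verb's subject and object, a vector like this could serve as a compact summary of a recorded performance's motion path, to be compared or interpolated across argument placements.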