Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation
ACM Transactions on Accessible Computing (TACCESS)
Animations of American Sign Language (ASL) have accessibility benefits for many signers with lower levels of written-language literacy. This paper introduces a novel method for modeling and synthesizing ASL animations based on movement data collected from native signers. The technique synthesizes animations of signs whose performance depends on the arrangement of locations in 3D space that represent entities under discussion; in particular, it handles inflecting verbs, which are frequent in ASL. Mathematical models of hand movement are trained on examples of signs produced by a human animator. In an evaluation study with 18 native signers, animations synthesized from the model were judged to be of similar quality to animations produced by a human animator, and they led to higher comprehension scores than baseline approaches limited to selecting signs from a finite dictionary. The technique is applicable to ASL and other sign languages: it can significantly increase the repertoire of generation systems and can partially automate the work of humans using scripting systems.
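The core idea of training movement models on examples and then synthesizing a verb performance for a new spatial arrangement can be illustrated with a minimal sketch. The snippet below is a simplified stand-in, not the authors' actual model: it assumes each training example pairs a referent's 3D location with a fixed-length hand trajectory, and fits an affine least-squares map from location to trajectory. All function names and data shapes here are hypothetical.

```python
import numpy as np

def fit_verb_model(targets, trajectories):
    """Fit an affine least-squares map from a referent's 3D location
    to a hand trajectory of T keyframes x 3 coordinates.
    targets: (N, 3) referent locations; trajectories: (N, T, 3)."""
    N, T, _ = trajectories.shape
    X = np.hstack([targets, np.ones((N, 1))])   # affine feature: [x, y, z, 1]
    Y = trajectories.reshape(N, T * 3)          # flatten each trajectory
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (4, T*3) weight matrix
    return W, T

def synthesize(W, T, target):
    """Predict a hand trajectory for a new referent location."""
    x = np.append(np.asarray(target, dtype=float), 1.0)
    return (x @ W).reshape(T, 3)
```

A real system would model more channels (both hands, orientation, timing) and use richer regression, but the same train-then-parameterize pattern lets one recorded verb generalize to arbitrary referent locations instead of requiring a dictionary entry per arrangement.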