We are studying techniques for producing realistic and understandable animations of American Sign Language (ASL); such animations have accessibility benefits for signers with lower levels of written language literacy. This article describes and evaluates a novel method for modeling and synthesizing ASL animations based on samples of ASL signs collected from native signers. We apply this technique to ASL inflecting verbs, common signs in which the location and orientation of the hands are influenced by the arrangement of locations in 3D space that represent entities under discussion. We train mathematical models of hand movement on animation data of signs produced by a native signer. In evaluation studies with native ASL signers, the verb animations synthesized from our model had subjective-rating and comprehension-question scores similar to those of animations produced by a human animator, and higher scores than baseline animations. Further, we examine a split modeling technique for accommodating certain verb signs with complex movement patterns, and we analyze how robust our modeling techniques are to reductions in the size of their training data. The modeling techniques in this article are applicable to other types of ASL signs and to other sign languages used internationally. Our models' parameterization of sign animations can increase the repertoire of generation systems and can partially automate the work of humans using sign language scripting systems.
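To illustrate the general idea of training a model of hand movement on recorded sign samples, the sketch below fits a per-axis linear regression that maps the 3D location of a discussed entity in signing space to a keyframe location for the signer's hand. This is a simplified, hypothetical stand-in for the paper's actual models: the function names, the per-axis independence assumption, and the toy training data are all illustrative, not taken from the article.

```python
# Hedged sketch, NOT the authors' actual model: assume each coordinate of a
# verb's end-of-movement hand location varies linearly with the matching
# coordinate of the target entity's location in signing space.

def fit_axis(xs, ys):
    """Ordinary least-squares slope and intercept for one coordinate axis."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def train(examples):
    """examples: list of (target_location, hand_location) pairs of 3-tuples,
    e.g. recorded from a native signer performing the verb toward targets."""
    return [fit_axis([t[axis] for t, _ in examples],
                     [h[axis] for _, h in examples])
            for axis in range(3)]

def synthesize(models, target):
    """Predict a hand-location keyframe for a novel target arrangement."""
    return tuple(slope * target[axis] + intercept
                 for axis, (slope, intercept) in enumerate(models))

# Toy "motion-capture" samples: hypothetical, for illustration only.
examples = [
    ((0.0, 0.0, 0.0), (0.10, 0.40, 0.20)),
    ((1.0, 1.0, 1.0), (0.60, 0.90, 0.70)),
    ((2.0, 2.0, 2.0), (1.10, 1.40, 1.20)),
]
models = train(examples)
```

Once trained, `synthesize(models, target)` yields a hand-location keyframe for a target location never seen in training, which is the sense in which such a parameterized model can extend a generation system's repertoire beyond its recorded samples.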