While sign language animation software has great potential to improve the accessibility of information for deaf individuals with low written-language literacy, the understandability of current sign language animation systems is limited. Data-driven methodologies that use annotated sign language corpora encoding detailed human movement have enabled researchers to address several key linguistic challenges in American Sign Language (ASL) generation. This article motivates and describes our current research on collecting a motion-capture corpus of ASL. To evaluate our motion-capture configuration, calibration, and recording protocol, we have conducted several rounds of evaluation studies with native ASL signers, and we have used the collected data to synthesize novel ASL animations, which have also been evaluated in experimental studies with native signers.