ACM SIGACCESS Accessibility and Computing

- Modeling and synthesizing spatially inflected verbs for American Sign Language animations. Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility.
- Effect of presenting video as a baseline during an American Sign Language animation user study. Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility.
- Learning a vector-based model of American Sign Language inflecting verbs from motion-capture data. SLPAT '12: Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies.
- Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility.
- Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation. ACM Transactions on Accessible Computing (TACCESS).
Computer-generated animations of American Sign Language (ASL) can improve the accessibility of information, communication, and services for the many deaf adults in the US who have difficulty reading English text. Unfortunately, several linguistic aspects of ASL cannot yet be produced by automatic generation or translation systems (or are time-consuming for human animators to create). To determine how important such phenomena are to user satisfaction and to the comprehension of ASL animations, studies were conducted in which native ASL signers evaluated ASL animations with and without: establishment of spatial reference points around the virtual human signer to represent the entities under discussion, pointing pronoun signs, contrastive role shift, and spatial inflection of ASL verbs. Adding these phenomena to ASL animations led to a significant improvement in user comprehension, motivating future research on automating their generation.
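The core idea behind spatial reference points and spatially inflected verbs can be illustrated with a minimal sketch: entities under discussion are assigned locations (loci) in the signing space around the virtual signer, and a directional verb's motion path is then planned from the subject's locus toward the object's locus. This is only an illustrative data-structure sketch, not the system described in these papers; all names, coordinates, and function signatures here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Locus:
    """A 3D point in signing space, relative to the signer's torso (hypothetical units)."""
    x: float
    y: float
    z: float

# Hypothetical discourse model: each entity mentioned in the discourse
# is associated with a spatial reference point around the signer.
signing_space = {
    "BROTHER": Locus(-0.4, 0.0, 0.3),   # established on the signer's right
    "DOCTOR":  Locus(0.4, 0.0, 0.3),    # established on the signer's left
}

def inflect_verb(verb: str, subject: str, obj: str) -> tuple[str, Locus, Locus]:
    """Plan the path of a directional ASL verb: the hand movement
    starts at the subject's locus and ends at the object's locus."""
    return verb, signing_space[subject], signing_space[obj]

# E.g. "brother gives (something) to doctor": the GIVE sign would
# move from the brother's locus toward the doctor's locus.
verb, start, end = inflect_verb("GIVE", "BROTHER", "DOCTOR")
```

A pointing pronoun sign could be modeled the same way: the pronoun is realized as a point toward the referent's stored locus, which is why establishing the reference points first matters for comprehension.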