Many researchers internationally are studying how to synthesize computer animations of sign language; such animations have accessibility benefits for people who are deaf and have lower literacy in written languages. The field has not yet reached a consensus on how best to evaluate the quality of sign language animations, and this article explores an important methodological issue for researchers conducting experimental studies with participants who are deaf. Traditionally, when an animation is evaluated, lower and upper baselines are shown for comparison during the study. For the upper baseline, some researchers use carefully produced animations, while others use videos of human signers. Specifically, this article investigates whether, in studies where signers view sign language animations and answer subjective and comprehension questions, participants' responses differ when actual videos of human signers are also shown during the study. Through three sets of experiments, we characterize how participants' Likert-scale subjective judgments of sign language animations are negatively affected when they are also shown videos of human signers for comparison, especially when the two are displayed side by side. We also identify a small positive effect on the comprehension of sign language animations when studies also contain videos of human signers. Our results enable direct comparison of previously published evaluations of sign language animations that used different types of upper baselines, whether video or animation. Our results also provide methodological guidance for researchers designing evaluation studies of sign language animation, or designing experimental stimuli or questions for participants who are deaf.