Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation
ACM Transactions on Accessible Computing (TACCESS)
Animations of American Sign Language (ASL) have accessibility benefits for many signers with lower levels of written-language literacy. Our lab has conducted several prior studies to evaluate synthesized ASL animations by asking native signers to watch different versions of animations and to answer comprehension and subjective questions about them. As an upper baseline, we used an animation of a virtual human carefully created by a human animator who is a native ASL signer. Before deciding whether to instead use videos of human signers as an upper baseline, we wanted to quantify how including a video upper baseline would affect how participants evaluate the ASL animations presented in a study. In this paper, we replicate a user study we conducted two years ago, with one difference: we replaced our original animation upper baseline with a video of a human signer. We found that adding a human video upper baseline depressed the subjective Likert-scale scores that participants assigned to the other stimuli (the synthesized animations) in the study when viewed side-by-side. This paper provides methodological guidance on how to design user studies evaluating sign language animations, and it facilitates comparison of studies that have used different upper baselines.
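The abstract does not specify how the baseline effect was quantified, but since Likert-scale responses are ordinal, a nonparametric test such as the Mann-Whitney U test is one common way to compare the scores that the same animation stimuli receive across the two study versions. The sketch below is purely illustrative: the variable names and all rating values are fabricated placeholders, not data from the paper.

# Illustrative sketch only (not the authors' actual analysis): comparing
# subjective Likert ratings of the same synthesized animations across two
# study versions with different upper baselines. All values are invented.
from scipy.stats import mannwhitneyu

# Hypothetical 1-10 Likert ratings from two participant groups.
ratings_animation_baseline = [7, 6, 8, 7, 5, 6, 7, 8, 6, 7]  # original study
ratings_video_baseline = [5, 4, 6, 5, 4, 5, 6, 5, 4, 5]      # replication with video

# Two-sided test: do the two groups rate the same animations differently?
stat, p = mannwhitneyu(ratings_animation_baseline,
                       ratings_video_baseline,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
# A significant difference would suggest the choice of upper baseline
# shifts how participants score the other stimuli in the study.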