There are many important factors in the design of evaluation studies for systems that generate animations of American Sign Language (ASL) sentences, and techniques for evaluating natural language generation of written text are not easily adapted to ASL. When conducting user-based evaluations, several cultural and linguistic characteristics of members of the American Deaf community must be taken into account to ensure accurate results. This article describes the implementation and user-based evaluation (by native ASL signers) of a prototype ASL natural language generation system that produces sentences containing classifier predicates: frequent and complex spatial phenomena that previous ASL generators have not produced. Native signers preferred the system's output to Signed English animations, scoring it higher in grammaticality, understandability, and naturalness of movement, and they were also more successful at a comprehension task after viewing the system's classifier predicate animations.
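The subjective side of such an evaluation reduces to collecting per-animation Likert-style ratings on several scales and comparing their means across the two conditions (classifier-predicate output vs. Signed English baseline). The sketch below illustrates that aggregation step only; the condition names, scale names, and scores are illustrative placeholders, not the study's actual instrument or data.

```python
# Hedged sketch: aggregating hypothetical per-condition Likert ratings
# on the three subjective scales named in the abstract. All values and
# identifiers here are illustrative, not taken from the study.
from statistics import mean

SCALES = ("grammaticality", "understandability", "naturalness")

def summarize(responses):
    """Group rating records by condition and compute each scale's mean.

    responses: list of dicts, each with a 'condition' key plus one
    numeric rating per scale in SCALES.
    """
    by_condition = {}
    for r in responses:
        by_condition.setdefault(r["condition"], []).append(r)
    return {
        cond: {scale: round(mean(r[scale] for r in rs), 2) for scale in SCALES}
        for cond, rs in by_condition.items()
    }

# Hypothetical ratings from two participants per condition.
responses = [
    {"condition": "classifier_predicate", "grammaticality": 8,
     "understandability": 9, "naturalness": 7},
    {"condition": "classifier_predicate", "grammaticality": 7,
     "understandability": 8, "naturalness": 8},
    {"condition": "signed_english", "grammaticality": 5,
     "understandability": 6, "naturalness": 5},
    {"condition": "signed_english", "grammaticality": 6,
     "understandability": 5, "naturalness": 6},
]
print(summarize(responses))
```

A real study design would pair this with significance testing and with the objective comprehension-task accuracy per condition; the point here is only the per-condition grouping of subjective scores.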