Motion-capture recordings of sign language are used in research on automatic recognition of sign language and on generation of sign language animations, both of which have accessibility applications for deaf users with low levels of written-language literacy. Motion-capture gloves are used to record the wearer's handshape. Unfortunately, they require a time-consuming and inexact calibration process each time they are worn. This article describes the design and evaluation of a new calibration protocol for motion-capture gloves, designed to make the process more efficient and to be accessible for participants who are deaf and use American Sign Language (ASL). The protocol was evaluated experimentally: deaf ASL signers wore the gloves, were calibrated (once using the new protocol and once using the calibration routine provided by the glove manufacturer), and were asked to perform sequences of ASL handshapes. Five native ASL signers rated the correctness and understandability of the collected handshape data. In an additional evaluation, ASL signers were asked to perform ASL stories while wearing the gloves and a motion-capture bodysuit (in some cases our new calibration protocol was used; in other cases, the standard protocol). Later, twelve native ASL signers watched animations produced from this motion-capture data and answered comprehension questions about the stories. In both evaluation studies, the new protocol received significantly higher scores than the standard calibration. The protocol has been made freely available online; it includes directions for the researcher, images and videos of how participants move their hands during the process, and directions for participants (as ASL videos and English text).
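To illustrate why glove calibration matters, the sketch below shows the simplest common approach: a per-sensor linear fit from two reference poses (flat hand and closed fist), mapping raw bend-sensor readings to joint angles. This is a generic sketch of the kind of routine glove SDKs typically provide, not the protocol described in this article; the sensor values and the 0°/90° angle range are illustrative assumptions.

```python
# Generic per-sensor linear data-glove calibration (illustrative sketch).
# Assumption: each bend sensor responds roughly linearly, so two reference
# poses (flat hand = 0 degrees, fist = 90 degrees) suffice to fit
# angle = gain * raw + offset for every sensor.

def fit_linear_calibration(raw_flat, raw_fist, angle_flat=0.0, angle_fist=90.0):
    """Fit (gain, offset) per sensor from two reference-pose readings."""
    calib = []
    for r_flat, r_fist in zip(raw_flat, raw_fist):
        gain = (angle_fist - angle_flat) / (r_fist - r_flat)
        offset = angle_flat - gain * r_flat
        calib.append((gain, offset))
    return calib

def apply_calibration(calib, raw):
    """Convert one frame of raw sensor readings to joint angles in degrees."""
    return [gain * r + offset for (gain, offset), r in zip(calib, raw)]

# Example: three bend sensors, each with a different raw range,
# as happens when the glove fits each wearer's hand differently.
raw_flat = [100, 120, 90]    # readings with the hand held flat
raw_fist = [600, 580, 640]   # readings with the hand in a fist
calib = fit_linear_calibration(raw_flat, raw_fist)
angles = apply_calibration(calib, [350, 350, 365])  # a half-bent pose
```

Because the fit depends on the wearer performing the reference poses accurately, a poorly executed calibration pose skews every subsequent frame, which is one reason calibration quality directly affects the intelligibility of recorded handshapes.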