Many deaf adults in the U.S. have difficulty reading written English text; computer animations of American Sign Language (ASL) can improve these individuals' access to information, communication, and services. Current ASL animation technology cannot automatically generate expressions in which the signer associates locations in space with entities under discussion, nor can it generate the many ASL signs whose movements are modified based on these locations. To determine how important such phenomena are to user satisfaction and to deaf individuals' comprehension of animations, we conducted a study in which native ASL signers evaluated ASL animations with and without entity-representing spatial phenomena. We found that including these expressions in the repertoire of ASL animation systems significantly improved user comprehension of the animations, motivating future research on automatically generating such ASL spatial expressions.