For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English rather than American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at the limited bandwidths of cell phone networks. Motivated by this constraint, we conducted one focus group and one user study with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye-tracking results showing that high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, higher-quality frames are displayed each second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because these yield better-quality frames for a fixed bit rate. These results show promise for real-time access to the current cell phone network through sign-language-specific encoding techniques.
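The bit-budget arithmetic behind both techniques can be sketched in a few lines (a hypothetical illustration, not the paper's encoder; the bit rates, frame rates, and region sizes below are assumed numbers, not measurements from the study). At a fixed bit rate, lowering the frame rate leaves more bits for each frame, and a region-of-interest scheme shifts those bits toward the face region.

```python
def bits_per_frame(bitrate_kbps, fps):
    """At a fixed bit rate, fewer frames per second means more bits per frame."""
    return bitrate_kbps * 1000 / fps

def roi_bit_split(frame_bits, face_area_fraction, face_quality_boost):
    """Split a frame's bit budget between the face region and the background.

    face_area_fraction: fraction of the frame's pixels occupied by the face.
    face_quality_boost: how many times more bits per pixel the face receives.
    Returns (face_bits, background_bits).
    """
    weight_face = face_area_fraction * face_quality_boost
    weight_bg = 1.0 - face_area_fraction
    face_bits = frame_bits * weight_face / (weight_face + weight_bg)
    return face_bits, frame_bits - face_bits

# Hypothetical 30 kbps link: dropping from 15 fps to 10 fps raises the
# per-frame budget from 2000 to 3000 bits.
full_rate = bits_per_frame(30, 15)
reduced = bits_per_frame(30, 10)

# Give a face region covering 20% of the frame 4x the bits per pixel.
face, background = roi_bit_split(reduced, face_area_fraction=0.2,
                                 face_quality_boost=4)
```

Under these assumed numbers the face region, despite covering a fifth of the frame, receives half of the per-frame bits, which is the kind of "moderate quality increase in the face region" the participants preferred.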