In the age of speech and voice recognition technologies, sign language recognition is an essential part of ensuring equal access for deaf people. To date, sign language recognition research has mostly ignored the facial expressions that arise as part of natural sign language discourse, even though they carry important grammatical and prosodic information. One reason is that tracking the motion and dynamics of expressions in human faces from video is a hard problem, especially given the frequent occlusions caused by the signers' hands. This paper presents a 3D deformable model tracking system to address this problem, with a special emphasis on outlier rejection methods for handling occlusions, and applies it to sequences of native signers taken from the National Center for Sign Language and Gesture Resources (NCSLGR). The experiments conducted in this paper validate the output of the face tracker against expert human annotations of the NCSLGR corpus, demonstrate the promise of the proposed face tracking framework for sign language data, and reveal that the tracking framework picks up properties that ideally complement human annotations for linguistic research.
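The occlusion handling hinges on outlier rejection: measurements hidden by the signer's hands should not drag the model fit off the face. The paper's system fits a full 3D deformable face model; the sketch below illustrates only the general principle on a toy linear problem, using Huber-weighted iteratively reweighted least squares so that large-residual points are downweighted. All function names and parameters here are illustrative assumptions, not taken from the paper.

    import numpy as np

    def huber_weights(r, k=1.345):
        """Huber weights: 1 for small residuals, k/|r| beyond the threshold."""
        a = np.abs(r)
        w = np.ones_like(a)
        big = a > k
        w[big] = k / a[big]
        return w

    def robust_fit(A, b, iters=20):
        """Solve A x ~ b via iteratively reweighted least squares (IRLS).

        Correspondences with large residuals (e.g., face features occluded
        by the hands) are progressively downweighted, so they barely
        influence the final fit.
        """
        x = np.linalg.lstsq(A, b, rcond=None)[0]           # plain LS start
        for _ in range(iters):
            r = A @ x - b
            scale = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust sigma via MAD
            sw = np.sqrt(huber_weights(r / scale))         # sqrt weights for WLS
            x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + 0.01 * rng.normal(size=200)
        b[:40] += 5.0                                      # simulate occluded outliers
        print("robust fit:", robust_fit(A, b))             # close to x_true
        print("plain LS  :", np.linalg.lstsq(A, b, rcond=None)[0])

Running the sketch shows the robust fit staying near the true parameters while ordinary least squares is pulled away by the simulated occlusions, which is the qualitative behavior an occlusion-tolerant face tracker needs.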