One of the main problems in automatic sign language translation is the use of spatial information in sign language and its proper representation and translation, e.g., the handling of spatial reference points in the signing space. Such locations are encoded as static points in signing space that serve as spatial references for motion events. We present a new approach that starts from a large-vocabulary speech recognition system and is able to recognize sentences of continuous sign language independently of the speaker. The manual features obtained from tracking are passed to the statistical machine translation system to improve its accuracy. On a publicly available benchmark database, we achieve competitive recognition performance and can likewise improve the translation performance by integrating the tracking features.
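As a rough illustration of how tracking features might be integrated into the translation input, one option is factored tokens in which each recognized gloss carries a discretized hand-location tag. The function names, the grid-based binning, and the `GLOSS|LOCn` token format below are illustrative assumptions for this sketch, not the paper's actual pipeline.

```python
# Hypothetical sketch: augment recognized sign-language glosses with
# discretized hand-tracking features before handing them to a statistical
# machine translation system. All names and the binning scheme are
# assumptions made for illustration.

def discretize_position(x, y, grid=4):
    """Map a normalized hand position in [0, 1] x [0, 1] to a grid-cell index."""
    col = min(int(x * grid), grid - 1)
    row = min(int(y * grid), grid - 1)
    return row * grid + col

def augment_glosses(glosses, hand_positions, grid=4):
    """Attach a spatial-reference tag to each gloss, factored-input style."""
    return [
        f"{gloss}|LOC{discretize_position(x, y, grid)}"
        for gloss, (x, y) in zip(glosses, hand_positions)
    ]

# Example: a pointing sign (IX-3) placed in the upper right of signing space,
# followed by a noun placed in the lower left.
tokens = augment_glosses(["IX-3", "HOUSE"], [(0.9, 0.1), (0.2, 0.8)])
```

In a factored translation setup, the spatial tag would act as an additional input factor, letting the translation model distinguish otherwise identical glosses that refer to different locations in signing space.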