Sign language recognition constitutes a challenging field of research in computer vision. Common problems such as overlap, ambiguity, and minimal pairs occur frequently and require robust algorithms for feature extraction and processing. We present a system that performs person-dependent recognition of 232 isolated signs with an accuracy of 99.3% in a controlled environment. Person-independent recognition rates reach 44.1% for 221 signs. An average performance of 87.8% is achieved for six signers in various uncontrolled indoor and outdoor environments, using a reduced vocabulary of 18 signs. The system uses a background model to remove static areas from the input video at the pixel level. In the tracking stage, multiple hypotheses are pursued in parallel to handle ambiguities and facilitate retrospective correction of errors. A winner hypothesis is found by applying high-level knowledge of the human body, hand motion, and the signing process. Overlaps are resolved by template matching, exploiting temporally adjacent frames with little or no overlap. The extracted features are normalized for person independence and robustness, and classified by Hidden Markov Models.
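The pixel-level background removal described above can be illustrated with a simple running-average background model. This is a minimal sketch, not the authors' exact method; the update rate `alpha` and the deviation `threshold` are assumed values chosen for illustration.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model; alpha is an assumed update rate."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Mark pixels whose deviation from the background exceeds a threshold."""
    return np.abs(frame - background) > threshold

# Toy example: a static dark scene with one bright moving region.
background = np.zeros((4, 4))
frame = np.zeros((4, 4))
frame[1, 2] = 200.0          # a single "hand" pixel entering the scene
mask = foreground_mask(background, frame)
print(int(mask.sum()))       # number of foreground pixels detected
background = update_background(background, frame)
```

Static areas decay into the background over time, so only moving regions (such as the signer's hands) survive the mask and are passed to the tracking stage.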