This work presents a survey of video databases that can be used within a continuous sign language recognition scenario to measure the performance of head and hand tracking algorithms, either with respect to a tracking error rate or with respect to a word error rate criterion. Robust tracking algorithms are required because the signing hand frequently moves in front of the face, may temporarily disappear, or may cross the other hand. Few studies consider the recognition of continuous sign language, and those that do usually rely on special devices such as colored gloves or blue-box environments to accurately track the regions of interest in sign language processing. Ground-truth labels for hand and head positions have been annotated for more than 30k frames in several publicly available video databases of varying degrees of difficulty, and preliminary tracking results are presented.
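A tracking error rate of the kind mentioned above can be sketched as the fraction of frames in which the tracked position deviates from the annotated ground truth by more than a fixed pixel tolerance. The sketch below is illustrative only: the function name, input format, and threshold value are assumptions, as the abstract does not specify the exact criterion.

```python
import math

def tracking_error_rate(predicted, ground_truth, threshold=10.0):
    """Fraction of frames where the tracked position deviates from the
    ground-truth annotation by more than `threshold` pixels.

    `predicted` and `ground_truth` are equal-length lists of (x, y)
    pixel coordinates, one entry per frame. The Euclidean distance and
    the default 10-pixel tolerance are illustrative assumptions, not
    the criterion used in the survey itself.
    """
    if len(predicted) != len(ground_truth):
        raise ValueError("one prediction per annotated frame is required")
    errors = 0
    for (px, py), (gx, gy) in zip(predicted, ground_truth):
        # Count the frame as a tracking error if the Euclidean
        # deviation exceeds the tolerance.
        if math.hypot(px - gx, py - gy) > threshold:
            errors += 1
    return errors / len(predicted)
```

For example, with two frames where the first prediction lies within tolerance and the second is far off, the rate is 0.5. A word-error-rate criterion would instead be computed downstream, on the recognized sign sequence, via edit distance against a reference transcription.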