Most research on sign language recognition concentrates on recognizing only manual signs (hand gestures and shapes), discarding a very important component: the non-manual signals (facial expressions and head/shoulder motion). We address the recognition of signs with both manual and non-manual components using a sequential belief-based fusion technique. The manual components, which carry information of primary importance, are used in the first stage. The second stage, which makes use of the non-manual components, is invoked only when the first stage's decision shows hesitation. We employ belief formalism both to model this hesitation and to determine the sign clusters within which discrimination takes place in the second stage. We have implemented this technique in a sign tutor application. Our results on the eNTERFACE'06 ASL database show an improvement over a baseline system that uses parallel or feature-level fusion of manual and non-manual features: we achieve an accuracy of 81.6%.
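The two-stage decision logic can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the margin-based hesitation test, and the way the sign cluster is formed are all simplifying assumptions standing in for the full belief-function machinery.

```python
def hesitates(beliefs, margin=0.2):
    """First stage 'hesitates' when the top two belief masses are close.

    `beliefs` maps sign labels to belief masses; `margin` is an
    illustrative threshold, not a value from the paper.
    """
    ranked = sorted(beliefs.values(), reverse=True)
    return len(ranked) > 1 and (ranked[0] - ranked[1]) < margin


def sequential_fusion(manual_beliefs, nonmanual_beliefs, margin=0.2):
    """Decide from manual beliefs alone; on hesitation, discriminate
    within the cluster of contested signs using non-manual beliefs."""
    if not hesitates(manual_beliefs, margin):
        # Confident first-stage decision: manual features suffice.
        return max(manual_beliefs, key=manual_beliefs.get)
    # Cluster of signs whose manual belief is close to the maximum.
    top = max(manual_beliefs.values())
    cluster = {s for s, b in manual_beliefs.items() if top - b < margin}
    # Second stage: rank only the cluster members by non-manual belief.
    return max(cluster, key=lambda s: nonmanual_beliefs.get(s, 0.0))
```

For example, manual beliefs of {A: 0.8, B: 0.1} yield an immediate decision for A, while {A: 0.45, B: 0.40} triggers the second stage, where the non-manual beliefs pick the winner among A and B.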