Sign language comprises two categories of signals: manual signals, such as signs and fingerspelling, and non-manual signals, such as body gestures and facial expressions. This paper proposes a new method for recognizing manual signals together with facial expressions as non-manual signals. The proposed method involves three steps. First, a hierarchical conditional random field detects candidate segments of manual signals. Second, the BoostMap embedding method verifies the hand shapes of segmented signs and recognizes fingerspelling. Finally, a support vector machine recognizes facial expressions as non-manual signals; this final step is taken only when the previous two steps leave some ambiguity. The experimental results indicate that the proposed method recognizes sign language with 84% accuracy on utterance data.
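The three-stage decision flow described in the abstract can be sketched as follows. This is a minimal illustration of the control flow only: every class and function name here is a hypothetical stand-in, and the real components (a hierarchical conditional random field, BoostMap embedding, and an SVM) are replaced by trivial placeholders.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Segment:
    frames: List[int]   # frame indices of a candidate manual-signal segment
    label: str          # tentative sign label from the segmentation stage
    confidence: float   # hand-shape verification score in [0, 1]

# Stage 1 (stand-in): the paper uses a hierarchical CRF to detect candidate
# segments; here we simply pass pre-made candidates through.
def detect_segments(candidates: List[Segment]) -> List[Segment]:
    return candidates

# Stage 2 (stand-in): BoostMap embedding verifies hand shapes; here a plain
# confidence threshold decides whether a segment is unambiguous.
def verify_hand_shape(seg: Segment, threshold: float = 0.7) -> bool:
    return seg.confidence >= threshold

# Stage 3 (stand-in): an SVM over facial-expression features would resolve
# ambiguous segments; here a lookup table plays that role.
def classify_facial_expression(seg: Segment,
                               facial_labels: Dict[str, str]) -> Optional[str]:
    return facial_labels.get(seg.label)

def recognize(candidates: List[Segment],
              facial_labels: Dict[str, str]) -> List[str]:
    results = []
    for seg in detect_segments(candidates):
        if verify_hand_shape(seg):
            results.append(seg.label)               # manual signal accepted
        else:
            fallback = classify_facial_expression(seg, facial_labels)
            results.append(fallback or seg.label)   # non-manual cue disambiguates
    return results

# Example: the second segment is ambiguous, so the facial-expression
# stage refines its label.
segments = [Segment([0, 1], "HELLO", 0.9), Segment([2, 3], "YES", 0.4)]
print(recognize(segments, {"YES": "YES-emphatic"}))
```

The key design point mirrored here is that the third stage runs only on segments the first two stages could not resolve, so the (more expensive) non-manual analysis is invoked sparingly.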