This paper presents an approach to large-lexicon sign recognition that does not require tracking. Rather than attempting to track the hands accurately through self-occlusion in unconstrained video, the method adopts a detection strategy in which patterns of motion are identified directly. It is demonstrated that detection can be achieved with only a minor loss of accuracy compared to a perfectly tracked sequence obtained using coloured gloves. The approach uses two levels of classification. In the first, a set of viseme classifiers detects the presence of sub-sign units of activity. The second level then assembles visemes into word-level signs using Markov chains. The system copes with a large lexicon and is more readily expandable than traditional word-level approaches. Using as few as 5 training examples, the proposed system achieves classification rates as high as 74.3% on a randomly selected 164-sign vocabulary, performing at a level comparable to tracking-based systems.
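The second classification level can be illustrated with a minimal sketch: given a sequence of detected viseme labels, each candidate word is scored by the log-likelihood of that sequence under the word's first-order Markov chain, and the best-scoring word is returned. The lexicon, viseme labels, and probabilities below are hypothetical, chosen only to make the example runnable, and are not taken from the paper:

```python
import math

def chain_log_likelihood(seq, init, trans):
    """Log-likelihood of a viseme sequence under a first-order Markov chain.

    init:  dict mapping viseme -> initial probability
    trans: dict mapping (prev, cur) viseme pairs -> transition probability
    Unseen visemes/transitions get a small floor probability.
    """
    ll = math.log(init.get(seq[0], 1e-6))
    for prev, cur in zip(seq, seq[1:]):
        ll += math.log(trans.get((prev, cur), 1e-6))
    return ll

def classify(seq, word_models):
    """Return the word whose Markov chain best explains the viseme sequence."""
    return max(word_models,
               key=lambda w: chain_log_likelihood(seq, *word_models[w]))

# Hypothetical two-word lexicon with visemes labelled "A", "B", "C".
word_models = {
    "HELLO":  ({"A": 0.9, "B": 0.1}, {("A", "B"): 0.8, ("B", "C"): 0.7}),
    "THANKS": ({"C": 0.9, "A": 0.1}, {("C", "B"): 0.8, ("B", "A"): 0.7}),
}

print(classify(["A", "B", "C"], word_models))  # → HELLO
```

Because each word is modelled independently over a shared viseme alphabet, extending the lexicon only requires adding a new chain, which is the expandability advantage the abstract attributes to sub-sign units over whole-word models.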