An approach is proposed for increasing the adaptability of a recognition system that recognises 10 elementary gestures and can be extended to sign language recognition. Recognition proceeds in two stages: a motion gradient orientation image is first extracted from the raw video input, and a feature vector generated from this image is then assigned to one of the 10 gesture classes by a sparse Bayesian classifier. The classifier is designed to support online incremental learning, so it can be re-trained to adapt to input captured under new conditions. Experiments show that re-training with 5 newly captured samples per gesture class boosts the classifier's accuracy from below 40% to over 80%. Besides this improved adaptability, the system runs reliably in real time and produces a probabilistic output that is useful in complex motion analysis.
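The first stage described above can be illustrated with a minimal sketch. The function below is a simplified stand-in for motion gradient orientation extraction, not the paper's exact formulation: it thresholds the difference between two consecutive grayscale frames to find moving pixels, computes spatial gradient orientations there, and summarises them as a normalised orientation histogram that could serve as the feature vector fed to a classifier. The threshold and bin count are illustrative assumptions.

```python
import numpy as np

def mgo_features(prev_frame, frame, diff_thresh=15.0, n_bins=8):
    """Sketch of a motion-gradient-orientation feature vector.

    prev_frame, frame: 2-D grayscale arrays of equal shape.
    Returns an n_bins-length histogram of gradient orientations,
    restricted to pixels that changed between the two frames.
    """
    f0 = prev_frame.astype(float)
    f1 = frame.astype(float)

    # Motion mask: pixels whose intensity changed noticeably.
    motion = np.abs(f1 - f0) > diff_thresh

    # Spatial gradients of the current frame (row and column derivatives).
    gy, gx = np.gradient(f1)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)          # in [-pi, pi]

    # Keep orientations only where there is both motion and a valid gradient.
    valid = motion & (magnitude > 1e-6)
    hist, _ = np.histogram(orientation[valid], bins=n_bins,
                           range=(-np.pi, np.pi))

    total = hist.sum()
    return hist / total if total else hist.astype(float)

# Synthetic example: a bright square appears between two frames.
prev = np.zeros((32, 32))
curr = np.zeros((32, 32))
curr[10:20, 10:20] = 255.0
features = mgo_features(prev, curr)
```

In a full pipeline along the lines the abstract sketches, such histograms (or richer descriptors derived from the motion gradient orientation image) would be the inputs to the sparse Bayesian classifier, whose probabilistic outputs also make the incremental re-training step natural.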