In this article, a hand gesture recognition system that allows interacting with a service robot, in dynamic environments and in real time, is proposed. The system detects hands and static gestures using a cascade of boosted classifiers, and recognizes dynamic gestures by computing temporal statistics of the hand's positions and velocities and classifying these features with a Bayes classifier. The main novelty of the proposed approach is the use of context information to continuously adapt the skin model used in the detection of hand candidates, to restrict the image regions that need to be analyzed, and to cut down the number of scales that need to be considered in the hand-searching and gesture-recognition processes. The system's performance is validated on real video sequences. On average, the system recognized static gestures in 70% of the cases and dynamic gestures in 75% of them, and it runs at a variable speed of 5-10 frames per second.
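The dynamic-gesture stage described above (temporal statistics of hand positions and velocities fed to a Bayes classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature layout (means and variances of x, y and their frame-to-frame velocities), the Gaussian naive-Bayes form, and all function names are assumptions introduced here.

```python
import math

def trajectory_features(points):
    """Temporal statistics of a hand trajectory given as (x, y) tuples:
    mean and variance of each coordinate and of its frame-to-frame
    velocity. The exact feature set is a hypothetical choice."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    vxs = [b - a for a, b in zip(xs, xs[1:])]  # per-frame x velocity
    vys = [b - a for a, b in zip(ys, ys[1:])]  # per-frame y velocity
    feats = []
    for series in (xs, ys, vxs, vys):
        mean = sum(series) / len(series)
        var = sum((v - mean) ** 2 for v in series) / len(series)
        feats.extend([mean, var])
    return feats

def gaussian_log_likelihood(x, mean, var):
    var = max(var, 1e-6)  # guard against zero variance
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def bayes_classify(feats, class_models, priors):
    """Pick the gesture class maximizing log prior plus the sum of
    per-feature Gaussian log-likelihoods (naive-Bayes assumption).
    class_models maps label -> list of (mean, var) per feature."""
    best, best_score = None, -math.inf
    for label, model in class_models.items():
        score = math.log(priors[label])
        for f, (m, v) in zip(feats, model):
            score += gaussian_log_likelihood(f, m, v)
        if score > best_score:
            best, best_score = label, score
    return best
```

In practice the per-class means and variances would be estimated from labeled gesture trajectories; the naive-Bayes factorization keeps both training and per-frame classification cheap enough for the real-time constraint the article targets.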