A new hand gesture recognition method based on Input-Output Hidden Markov Models (IOHMMs) is presented. The method addresses the dynamic aspects of gestures: gestures are extracted from a sequence of video images by tracking the skin-color blobs corresponding to the hand within a body-face space centered on the user's face. Our goal is to recognize two classes of gestures: deictic and symbolic.
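The skin-color blob extraction step described above can be sketched as follows. This is a minimal illustration, assuming simple HSV thresholds; the function names and threshold values are hypothetical, not the paper's actual skin-color model:

```python
import numpy as np

def skin_mask(frame_hsv, h_max=25, s_min=40, v_min=60):
    """Binary mask of likely skin pixels in an HSV image (H in [0, 180]).

    The fixed thresholds here are illustrative assumptions; a real system
    would typically fit a statistical skin-color model instead.
    """
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

def blob_centroid(mask):
    """Centroid (row, col) of the skin blob, or None if no skin pixels.

    Tracking the centroid frame-to-frame yields the hand trajectory that
    would be fed, relative to a face-centered coordinate frame, into the
    gesture classifier.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```

In use, the per-frame centroid is expressed relative to the detected face position, and the resulting sequence of body-face-space coordinates forms the observation stream for the IOHMM classifier.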