Gesture sequences typically share a common set of distinct internal sub-structures that recur across gestures. In this paper, we propose a method that uses a generative model to learn these common actions, which we refer to as sub-gestures, and in turn performs recognition. The proposed model learns sub-gestures by sharing parameters between gesture models. We evaluated our method on the Palm Graffiti digit-gesture dataset and showed that the model with shared parameters outperforms the same model without them. We also labeled different observation sequences, intuitively showing how sub-gestures relate to complete gestures.
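To make the parameter-sharing idea concrete, here is a minimal illustrative sketch (not the paper's actual model): each sub-gesture is a small HMM fragment with its own transition block and emission means, and full gesture models are assembled by concatenating shared fragments, so updating a fragment's parameters updates every gesture that contains it. All names (`stroke_down`, `stroke_right`, `build_gesture`) are hypothetical.

```python
import numpy as np

# Hypothetical sub-gesture fragments shared across gesture models:
# each is (transition block over its own states, Gaussian emission means).
# These values are illustrative, not from the paper.
sub_gestures = {
    "stroke_down":  (np.array([[0.7, 0.3], [0.0, 1.0]]), np.array([0.0, 1.0])),
    "stroke_right": (np.array([[0.6, 0.4], [0.0, 1.0]]), np.array([2.0, 3.0])),
}

def build_gesture(sub_names):
    """Concatenate shared sub-gesture fragments into one gesture HMM.

    Because fragments are shared objects, two gestures built from the
    same fragment literally share that fragment's parameters.
    """
    n = sum(sub_gestures[s][0].shape[0] for s in sub_names)
    A = np.zeros((n, n))      # full left-to-right transition matrix
    means = np.zeros(n)       # per-state emission means
    i = 0
    for s in sub_names:
        A_s, mu_s = sub_gestures[s]
        k = A_s.shape[0]
        A[i:i + k, i:i + k] = A_s
        means[i:i + k] = mu_s
        if i + k < n:
            # Link the fragment's final state to the next fragment's
            # first state, keeping the row stochastic.
            A[i + k - 1, i + k - 1] = 0.5
            A[i + k - 1, i + k] = 0.5
        i += k
    return A, means

# Two distinct gestures reuse the same two fragments in different orders.
A1, mu1 = build_gesture(["stroke_down", "stroke_right"])
A2, mu2 = build_gesture(["stroke_right", "stroke_down"])
```

In this toy construction, re-estimating the parameters of one shared fragment during training would jointly affect both gesture models, which is the intuition behind sharing parameters between gesture models to learn sub-gestures.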