Whole-hand input
Visual analysis of high DOF articulated objects with application to hand tracking
Towards 3D hand tracking using a deformable model. In Proc. FG '96, 2nd International Conference on Automatic Face and Gesture Recognition.
Extraction of 3D Hand Shape and Posture from Image Sequences for Sign Language Recognition. In Proc. AMFG '03, IEEE International Workshop on Analysis and Modeling of Faces and Gestures.
3-D hand posture recognition by training contour variation. In Proc. FGR '04, Sixth IEEE International Conference on Automatic Face and Gesture Recognition.
Robust person-independent visual sign language recognition. In Proc. IbPRIA '05, Second Iberian Conference on Pattern Recognition and Image Analysis, Part I.
We present a model-based method for hand posture recognition in monocular image sequences that measures joint angles, viewing angle, and position in space. Visual markers in the form of a colored cotton glove are used to extract descriptive and stable 2D features. Searching a synthetically generated database of 2.6 million entries, each consisting of 3D hand posture parameters and the corresponding 2D features, yields several candidate postures per frame. This ambiguity is resolved by exploiting temporal continuity between successive frames. The method is robust to noise, works from any viewing angle, and places no constraints on the hand posture. Self-occlusion of any number of markers is handled. It requires no initialization and retrospectively corrects posture errors when corroborating information becomes available. Besides a qualitative evaluation on real images, a quantitative evaluation on a large body of synthetic input data with varying degrees of noise demonstrates the effectiveness of the approach.
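The abstract's two-stage pipeline, per-frame candidate retrieval from a synthetic feature database followed by temporal disambiguation across frames, can be sketched as below. This is a minimal illustration, not the paper's implementation: the database size, feature and parameter dimensions, distance metrics, and all function names are assumptions, and the dynamic-programming path selection is one plausible way to exploit temporal continuity.

```python
import numpy as np

# Illustrative stand-ins for the paper's synthetic database
# (the real database holds 2.6 million entries; sizes here are assumed).
rng = np.random.default_rng(0)
N_ENTRIES = 1000     # database entries
FEATURE_DIM = 12     # 2D marker features per entry (assumed)
PARAM_DIM = 24       # 3D posture parameters per entry (assumed)

db_features = rng.random((N_ENTRIES, FEATURE_DIM))
db_params = rng.random((N_ENTRIES, PARAM_DIM))

def retrieve_candidates(observed_features, k=5):
    """Stage 1: return the k database postures whose stored 2D
    features best match the observed features (Euclidean distance)."""
    d = np.linalg.norm(db_features - observed_features, axis=1)
    idx = np.argsort(d)[:k]
    return db_params[idx]

def smooth_path(candidates_per_frame):
    """Stage 2: pick one candidate per frame so that the total
    posture change between successive frames is minimized
    (dynamic programming over the candidate lattice)."""
    prev_cost = np.zeros(len(candidates_per_frame[0]))
    backpointers = []
    for t in range(1, len(candidates_per_frame)):
        cur, prev = candidates_per_frame[t], candidates_per_frame[t - 1]
        # transition cost: distance between postures in parameter space
        trans = np.linalg.norm(cur[:, None, :] - prev[None, :, :], axis=2)
        total = trans + prev_cost[None, :]
        backpointers.append(np.argmin(total, axis=1))
        prev_cost = np.min(total, axis=1)
    # backtrack the cheapest candidate path
    path = [int(np.argmin(prev_cost))]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

# Simulated sequence of per-frame observations
frames = [retrieve_candidates(rng.random(FEATURE_DIM)) for _ in range(4)]
path = smooth_path(frames)
print(path)  # one chosen candidate index per frame
```

Because the cheapest path is recovered by backtracking after all frames are processed, later observations can override an earlier ambiguous choice, which mirrors the retrospective error correction described in the abstract.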