The biggest challenge in hand detection and tracking is the high dimensionality of the hand's kinematic configuration space, with about 30 degrees of freedom, which leads to a huge variance in its projections. This makes it difficult to arrive at a tractable model of the hand as a whole. To overcome this problem, we suggest concentrating on posture-invariant local constraints on finger appearances. We show that, besides skin color, there are a number of additional geometric and photometric invariants. This paper presents a novel approach to real-time hand detection and tracking that selects local regions complying with these posture invariants. While most existing methods for hand tracking rely on color-based segmentation as a first preprocessing step, we integrate color cues at the end of our processing chain in a robust manner. We show experimentally that our approach still performs robustly against cluttered backgrounds, even when using extremely low-quality skin color information. This avoids a user- and lighting-specific calibration of skin color before tracking.
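The pipeline ordering described above (geometric finger cues first, skin color only as a final, coarse check) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual method: it detects elongated, ridge-like local regions via Hessian eigenvalues (in the spirit of curvilinear structure detection) and only then prunes candidates with a deliberately crude skin-color heuristic. All function names, thresholds, and the skin rule are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def ridge_strength(gray, sigma=2.0):
    """Ridge measure from Hessian eigenvalues: a bright, elongated
    structure (e.g. a finger) has a strongly negative second
    derivative across its axis."""
    # Second-order Gaussian derivatives approximate the Hessian.
    Ixx = gaussian_filter(gray, sigma, order=(0, 2))  # d2/dx2
    Iyy = gaussian_filter(gray, sigma, order=(2, 0))  # d2/dy2
    Ixy = gaussian_filter(gray, sigma, order=(1, 1))  # d2/dxdy
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam2 = (Ixx + Iyy) / 2.0 - disc  # most negative on bright ridges
    return np.maximum(-lam2, 0.0)


def finger_candidates(rgb, ridge_thresh=0.5, skin_tol=0.2):
    """Select ridge-like pixels first, then prune with a coarse
    skin-color cue applied *last* (illustrative heuristic only;
    channels assumed to be floats in [0, 1])."""
    gray = rgb.mean(axis=2)
    r = ridge_strength(gray)
    candidates = r > ridge_thresh * r.max()
    # Very rough skin cue: red channel clearly dominates green.
    skin = (rgb[..., 0] - rgb[..., 1]) > skin_tol
    return candidates & skin
```

On a synthetic frame with a skin-colored vertical stripe against a dark background, the ridge stage fires on the stripe and the late color check removes non-skin ridges; the point of the sketch is only the ordering of the two cues, not the specific thresholds.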