We present a method for tracking the 3D position of a finger, using a single camera placed several meters away from the user. After skin detection, we use motion to identify the gesticulating arm. The finger point is found by analyzing the arm's outline. To derive a 3D trajectory, we first track the 2D positions of the user's elbow and shoulder. Because the lengths of a person's upper and lower arm are fixed, the possible locations of the elbow and finger each lie on a sphere of constant radius. From the previously tracked body points, we can reconstruct these spheres and compute the 3D positions of the elbow and finger. These steps are fully automated and require no human intervention. The system can be used as a visualization tool, or as a user input interface, in cases where the user would rather not be constrained by the camera system.
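The sphere construction at the heart of the method can be sketched as follows. This is a minimal illustration, assuming a simplified orthographic camera and known bone length; the function name `lift_to_3d` and the camera model are assumptions, not the paper's actual implementation, which may use a full perspective model.

```python
import math

def lift_to_3d(parent_xyz, child_xy, bone_length):
    """Recover the two candidate 3D positions of a child joint (e.g. the
    elbow) from the known 3D position of its parent joint (e.g. the
    shoulder), the child's 2D image position, and the fixed bone length.

    Under an orthographic camera (an assumption for this sketch), the
    image point fixes the child's x and y; the child must also lie on a
    sphere of radius `bone_length` centred on the parent, which yields
    the depth z up to a two-fold front/back ambiguity.
    """
    px, py, pz = parent_xyz
    cx, cy = child_xy
    planar_sq = (cx - px) ** 2 + (cy - py) ** 2
    if planar_sq > bone_length ** 2:
        raise ValueError("2D offset exceeds bone length; check tracking/scale")
    dz = math.sqrt(bone_length ** 2 - planar_sq)
    # Two intersections of the viewing ray with the sphere:
    return (cx, cy, pz - dz), (cx, cy, pz + dz)
```

Applying the same step twice (shoulder to elbow, then elbow to finger) gives the 3D finger position; the front/back ambiguity at each joint would need to be resolved by temporal continuity or anatomical constraints.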