In this work we describe a set of visual routines that support a novel sensor-free interface between a human and virtual objects. The visual routines detect, track, and interpret a pointing gesture in real time. This is solved in the context of a scenario that enables a user to activate virtual objects displayed on a projection screen. By changing the direction of pointing with an arm extended towards the screen, the user controls the motion of the virtual objects. The vision system consists of a single overhead-view camera and exploits a priori knowledge of the human body's appearance, the interactive context, and the environment. The system operates in real time on a standard Pentium PC platform.
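The geometric core of such an interface can be illustrated with a minimal sketch: from the overhead view, the head centroid and the tip of the extended hand define a pointing ray, which is intersected with the screen edge to obtain a screen coordinate. The function below is an illustrative assumption about that geometry, not the paper's actual routines; the coordinate conventions and names (`head`, `hand`, `screen_y`) are hypothetical.

```python
def pointing_target_x(head, hand, screen_y, screen_x_range):
    """Estimate where a pointing ray hits the screen edge (overhead view).

    head, hand      -- (x, y) image coordinates of the head centroid and
                       the extended hand tip (illustrative assumption)
    screen_y        -- y-coordinate of the screen edge in the same plane
    screen_x_range  -- (xmin, xmax) extent of the screen along that edge

    Returns the x-coordinate of the intersection, or None if the user is
    not pointing at the screen.
    """
    dx, dy = hand[0] - head[0], hand[1] - head[1]
    if dy == 0:
        return None  # ray runs parallel to the screen edge
    t = (screen_y - head[1]) / dy
    if t <= 0:
        return None  # ray points away from the screen
    x = head[0] + t * dx
    xmin, xmax = screen_x_range
    return x if xmin <= x <= xmax else None
```

In a full system the head and hand points would come from silhouette analysis of the overhead camera image, and the returned coordinate would be mapped to the projected virtual objects; temporal filtering of successive estimates would be needed for stable real-time control.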