Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, perceiving human behavior with robot sensors is more difficult. In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, the line between a person's eyes and hand is frequently assumed to be the pointing direction. However, since people tend to keep their line of sight free while pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or between elbow and hand may yield a better interpretation of the pointing direction. To achieve a better estimate, we extract a set of body features from the depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression. We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model is considerably more accurate than simple criteria such as the head-hand, shoulder-hand, or elbow-hand lines.
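The core learning step described above, regressing from body features to a pointing direction with a Gaussian Process, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel choice (squared exponential), the hyperparameters, and the feature/target layout are all assumptions; the real system would use features extracted from ToF depth and amplitude images and ground-truth directions from a calibration study.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gpr_fit_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """GP regression: posterior mean and per-point variance at X_test.

    X_train: (n, d) body-feature vectors (hypothetical layout, e.g. head,
             shoulder, elbow, and hand positions from a ToF camera).
    y_train: (n, k) ground-truth pointing directions (e.g. azimuth/elevation).
    """
    # Gram matrix of the training inputs, regularized by observation noise.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scale)

    # Solve via Cholesky for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha                       # posterior mean prediction

    # Predictive variance: prior variance minus the explained part.
    v = np.linalg.solve(L, K_s.T)
    var = rbf_kernel(X_test, X_test, length_scale).diagonal() - (v ** 2).sum(0)
    return mean, var

# Hypothetical usage: 50 frames of 12 body features, 2-D direction targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))
y = rng.normal(size=(50, 2))
mean, var = gpr_fit_predict(X, y, X[:5])
```

The predictive variance is one practical reason to prefer a GP here over a plain regressor: frames with unusual body configurations yield high variance, which a robot could use to ask for clarification instead of committing to a poor target estimate.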