Digital tabletop environments offer great potential for application scenarios in which multiple users interact simultaneously or work on collaborative tasks. So far, research in this field has focused on touch and tangible interaction, which takes place only on the tabletop's surface. Initial approaches involve the space above the surface, e.g., by employing freehand gestures, but these are either limited to specific scenarios or rely on obtrusive tracking solutions. In this paper, we propose an approach to unobtrusively segment and detect interaction above a digital surface using a depth-sensing camera. To achieve this, we adapt a previously presented approach that segments arms in depth data, moving it from a front-view to a top-view setup to facilitate the detection of hand positions. Moreover, we propose a novel algorithm to merge segments and compare it to the original segmentation algorithm. Since the algorithm involves a large number of parameters, estimating the optimal configuration is necessary; to accomplish this, we describe a low-effort approach to estimating the parameter configuration based on simulated annealing. An evaluation of our system's hand detection shows that a repositioning precision of approximately 1 cm is achieved. This accuracy is sufficient to reliably realize interaction metaphors above a surface.
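The abstract mentions estimating the segmentation algorithm's parameter configuration via simulated annealing. As a minimal illustration of that idea (not the authors' actual implementation: the cost function, neighbor perturbation, and cooling schedule below are generic placeholders), a simulated-annealing parameter search can be sketched as:

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, t0=1.0, cooling=0.95, steps=200):
    """Generic simulated annealing over a parameter configuration.

    cost:     maps a configuration to a scalar error (lower is better),
              e.g. a segmentation-error measure on labeled depth frames.
    neighbor: returns a randomly perturbed copy of a configuration.
    """
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept worse configurations with
        # probability exp(-delta / t), which shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost
```

The appeal for this setting is low effort: only a cost function (segmentation error on a small labeled set) and a perturbation rule are needed, rather than gradients over the many algorithm parameters.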