Touch is a very intuitive modality for interacting with objects displayed on arbitrary surfaces. On large-scale surfaces, however, not every point is within reach, so touch needs an extension that preserves its intuitiveness: pointing. We present a system that supports both input modalities in a single framework. Our method is based on 3D reconstruction using only standard RGB cameras and allows seamless switching between touch and pointing, even during interaction. The approach scales well to large surfaces without requiring any modification of them. We present a technical evaluation of the system's accuracy as well as a user study. Users preferred our system to a touch-only system because they had more freedom during interaction and solved the presented task significantly faster.
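The switching between touch and pointing described above can be sketched as a simple decision on the reconstructed 3D hand geometry: if the fingertip is (nearly) on the surface, treat the event as touch; otherwise cast a ray from the hand through the fingertip and intersect it with the display plane. The following is a minimal illustrative sketch, not the authors' implementation; the function names, the planar-surface assumption (display at z = 0), and the threshold value are all assumptions.

```python
TOUCH_THRESHOLD = 0.01  # metres; fingertip closer than this counts as touch (assumed value)

def classify_interaction(hand, fingertip):
    """Given reconstructed 3D positions (x, y, z) of the hand and the
    fingertip, with the display surface assumed at z = 0, decide between
    direct touch and distant pointing and return the 2D target on the
    surface, or None if the pointing ray misses the surface."""
    hx, hy, hz = hand
    fx, fy, fz = fingertip
    if abs(fz) <= TOUCH_THRESHOLD:
        # Finger is effectively on the surface: direct touch at the fingertip.
        return "touch", (fx, fy)
    # Otherwise cast a ray from the hand through the fingertip and
    # intersect it with the plane z = 0.
    dz = fz - hz
    if dz == 0:
        return "point", None  # ray parallel to the surface, no target
    t = -hz / dz
    if t <= 0:
        return "point", None  # ray points away from the surface
    return "point", (hx + t * (fx - hx), hy + t * (fy - hy))
```

Because the decision is re-evaluated per frame from the same 3D reconstruction, the user can move between touch and pointing mid-interaction without an explicit mode switch.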