Many of the interactive environments surrounding us today contain multiple mobile and/or stationary visual displays. Interaction with such multi-display environments, however, is still dominated by the personal computer paradigm: one user interacts with a single display at a time. In this paper we first present a new video-based input device, Airlift, which captures hands and fingertips independently of any display and therefore allows consistent interaction across display boundaries. Second, we propose a system architecture for interaction spanning multiple displays. Third, we begin to explore this new design space by proposing and evaluating Lift-and-Drop, a new interaction technique for copying data from one display to another. The results of our study show that, for the task considered, the new technique is superior to techniques based on traditional direct input devices such as pen or touch, which are confined to the surface of a single display.
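The abstract describes Lift-and-Drop only at a high level, so the following minimal Python sketch shows one way such a cross-display copy technique could be structured. It is an illustration under assumptions, not the authors' implementation: the names (HandEvent, Display, LiftAndDrop), the pinch gesture used as the grab signal, and the copy semantics on drop are all hypothetical. The key idea it captures is that, because the tracker follows the hand itself rather than a touch surface, the "lifted" state survives the gap between displays.

```python
"""Hedged sketch of a Lift-and-Drop state machine; all names are hypothetical."""
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class Phase(Enum):
    IDLE = auto()    # no object held
    LIFTED = auto()  # object picked up; hand may cross display boundaries


@dataclass
class Display:
    name: str
    objects: list = field(default_factory=list)

    def object_at(self, x: float, y: float) -> Optional[str]:
        # Hit-test stub: a real system would query the display's scene graph.
        return self.objects[0] if self.objects else None


@dataclass
class HandEvent:
    display: Optional[Display]  # display under the hand; None while between displays
    x: float
    y: float
    pinched: bool               # assumed grab gesture: fingertips closed


class LiftAndDrop:
    """Copies an object from the display where it was lifted to the
    display where it is released."""

    def __init__(self) -> None:
        self.phase = Phase.IDLE
        self.payload: Optional[str] = None

    def on_hand_event(self, ev: HandEvent) -> None:
        if self.phase is Phase.IDLE and ev.pinched and ev.display is not None:
            obj = ev.display.object_at(ev.x, ev.y)
            if obj is not None:
                self.payload = obj
                self.phase = Phase.LIFTED
        elif self.phase is Phase.LIFTED and not ev.pinched:
            if ev.display is not None and self.payload is not None:
                ev.display.objects.append(self.payload)  # copy, not move
            self.phase = Phase.IDLE
            self.payload = None


if __name__ == "__main__":
    tabletop = Display("tabletop", objects=["photo.jpg"])
    wall = Display("wall")
    technique = LiftAndDrop()
    technique.on_hand_event(HandEvent(tabletop, 0.4, 0.6, pinched=True))  # lift
    technique.on_hand_event(HandEvent(None, 0.9, 0.5, pinched=True))      # cross the gap
    technique.on_hand_event(HandEvent(wall, 0.1, 0.2, pinched=False))     # drop
    print(wall.objects)  # ['photo.jpg']
```

In the example run, the hand lifts an object on the tabletop, passes through the region between displays where no display is underneath it, and drops the object on the wall display. A surface-bound device such as pen or touch cannot represent that middle event, which is the limitation the paper's display-independent tracking is meant to remove.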