A key challenge in creating large interactive displays for public spaces is defining ways for users to interact that are both effective and easy to learn. This paper presents the outcomes of user evaluation sessions designed to test a series of gestures for interacting with large displays in public spaces, as an initial step towards the broader goal of establishing natural means for immersive interaction. The paper proposes a set of simple gestures for the basic actions of selecting and rearranging items on a large-scale dashboard. We performed a comparative analysis of the gestures, leading to a deeper understanding of the nature of spatial interaction between people and large public displays. More specifically, the analysis focuses on scenarios in which interaction is restricted to the user's own body, without assistance from handheld or other associated devices. The findings converge into a model for assessing the applicability of spatial gestures with respect to both the context and the content to which they are applied.