When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is a promising complement to manual input via mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction for pan and zoom in a desktop setting. While eye gaze is ideal for indicating a user's current point of interest and thus where to zoom in, foot interaction is well suited for parallel input, for example, to specify the zooming speed. Our investigation focuses on foot input devices that differ in their degrees of freedom (e.g., one- and two-directional foot pedals) and that can be seamlessly combined with gaze input.
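The described division of labor (gaze fixes the zoom center, a pedal drives the zoom speed) can be illustrated with a simple per-frame update. The following is a minimal sketch of that coupling, not the paper's implementation: the Viewport structure, the zoom_step function, and the normalized pedal deflection in [-1, 1] are all assumptions made for illustration.

```python
# Hypothetical sketch: gaze-supported foot zoom. The gaze point is the zoom
# center; the pedal's deflection sets the zoom speed.
from dataclasses import dataclass

@dataclass
class Viewport:
    x: float      # world x of the viewport's top-left corner
    y: float      # world y of the viewport's top-left corner
    scale: float  # zoom factor (screen pixels per world unit)

def zoom_step(view: Viewport, gaze_px: tuple[float, float],
              pedal: float, dt: float, gain: float = 1.5) -> Viewport:
    """Advance the viewport by one frame.

    gaze_px: current gaze position in screen pixels (the zoom center).
    pedal:   pedal deflection in [-1, 1]; positive zooms in, negative out.
    dt:      frame time in seconds.
    """
    # Exponential zoom: the rate is proportional to pedal deflection,
    # so a half-pressed pedal zooms at half the full rate.
    factor = (1.0 + gain) ** (pedal * dt)
    gx, gy = gaze_px
    # World point currently under the gaze (screen -> world).
    wx = view.x + gx / view.scale
    wy = view.y + gy / view.scale
    new_scale = view.scale * factor
    # Recompute the origin so the gazed-at world point stays put on screen.
    return Viewport(x=wx - gx / new_scale, y=wy - gy / new_scale,
                    scale=new_scale)
```

In this sketch, exponential scaling keeps the perceived zoom rate constant per unit of pedal deflection, and recomputing the viewport origin pins the gazed-at world point to the same screen position, so panning toward the point of interest falls out of zooming itself.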