This paper reports a study investigating the effectiveness of two approaches to improving gaze-based interaction for realistic, complex menu selection tasks. The first approach focuses on identifying hierarchical menu designs that are particularly suitable for gaze-based interaction; the second combines gaze-based interaction with speech as a second input modality. In an experiment with 40 participants, we investigated the impact of menu design, input device, and navigation complexity on accuracy, completion time, and user satisfaction in a menu selection task. The results for both objective task performance and subjective ratings confirmed our expectations: a semi-circle menu was better suited to gaze-based menu selection than either a linear or a full-circle menu. Contrary to our expectations, an input device based solely on eye gaze proved superior to the combined gaze- and speech-based device. Moreover, the drawbacks of a less suitable menu design (i.e., a linear or full-circle menu) and of the multimodal input device were particularly detrimental to performance in more complex navigational tasks.