What you look at is what you get: eye movement-based interaction techniques
CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
New technological windows into mind: there is more in eyes and brains for human-computer interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Manual and gaze input cascaded (MAGIC) pointing
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Inferring intent in eye-based interfaces: tracing eye movements with process models
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The GAZE groupware system: mediating joint attention in multiparty communication and collaboration
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Intelligent gaze-added interfaces
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Evaluation of eye gaze interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Button selection for general GUIs using eye and hand together
AVI '00 Proceedings of the Working Conference on Advanced Visual Interfaces
Features of Eye Gaze Interface for Selection Tasks
APCHI '98 Proceedings of the Third Asian Pacific Computer and Human Interaction
Using eye tracking for interaction
CHI '11 Extended Abstracts on Human Factors in Computing Systems
Comparison of gaze-to-objects mapping algorithms
Proceedings of the 1st Conference on Novel Gaze-Controlled Applications
Look & touch: gaze-supported target acquisition
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
This paper examines three gaze-added methods, Auto, Manual, and SemiAuto, that have the potential to increase the efficiency of target selection in general GUI environments such as MS-Windows and Mac-OS. All three methods combine the user's eye gaze with hand (mouse) input, so that a target can be selected even in the presence of jittery eye movements and eye-tracker measurement error. In an experiment with small, densely packed targets, the SemiAuto method was the most efficient of the three, and was as fast as or faster than mouse-only operation without a large increase in errors. In particular, for discontinuous selection (target selections starting from a randomly located cursor position), the SemiAuto method was about 31% faster than mouse-only operation.
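The core idea, combining gaze and hand input so that tracker error and eye jitter need not be corrected by the eye alone, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `Target`, `gaze_assisted_select`, `snap_radius`, and `mouse_offset` names are all assumptions introduced here.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical on-screen target (centre coordinates in pixels)."""
    x: float
    y: float

def gaze_assisted_select(gaze, targets, snap_radius=40.0, mouse_offset=(0.0, 0.0)):
    """Illustrative gaze-plus-hand selection (an assumption, not the
    paper's algorithm): warp the cursor to the gaze point, apply any
    manual mouse correction, then snap to the nearest target within
    snap_radius. The snap tolerates eye jitter and tracker error; the
    mouse offset lets the hand disambiguate closely spaced targets.
    Returns the selected Target, or None if nothing is close enough."""
    cx = gaze[0] + mouse_offset[0]
    cy = gaze[1] + mouse_offset[1]
    best, best_dist = None, snap_radius
    for t in targets:
        d = math.hypot(t.x - cx, t.y - cy)
        if d <= best_dist:
            best, best_dist = t, d
    return best

targets = [Target(100, 100), Target(300, 100)]
# Gaze lands slightly off the first target (simulated tracker error),
# but the snap still resolves it without any mouse movement.
picked = gaze_assisted_select((112, 95), targets)
```

A real SemiAuto-style method would also gate the warp on an explicit hand action (e.g. the first mouse movement) rather than following gaze continuously; that choice is what keeps the gaze channel from triggering unintended selections.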