Attentive interfaces for users with disabilities: eye gaze for intention and uncertainty estimation
Universal Access in the Information Society - Special Issue: Communication by Gaze Interaction
Eye movements can carry a rich set of information about a person's intentions. For physically impaired people, gaze may be the only communication channel available. People with severe disabilities are usually assisted by helpers during everyday activities, which over time can lead to the development of an effective visual communication protocol between helper and disabled person. This protocol allows them to communicate, to some extent, merely by glancing at one another. Starting from this premise, we propose a new model of attentive user interface endowed with some of the visual comprehension abilities of a human helper. The purpose of this interface is to identify the user's intentions and thus assist him/her in achieving simple interaction goals (e.g. object selection, task selection). The attentive interface is implemented through statistical analysis of the user's gaze data, based on a hidden Markov model.
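To illustrate the kind of inference the abstract describes, the sketch below estimates a user's intention from a stream of gaze events with a discrete hidden Markov model, using the standard forward algorithm. The states ("browsing" vs. "intending_selection"), the observation categories, and all probabilities are hypothetical placeholders chosen for illustration; they are not the parameters or structure of the paper's actual model.

```python
# Hypothetical example: intention estimation from gaze events via a
# discrete HMM and the forward algorithm. All parameters are assumed.

STATES = ["browsing", "intending_selection"]   # hidden intentions (assumed)
OBS = {"short_fixation": 0, "long_fixation": 1, "saccade": 2}  # gaze events

pi = [0.8, 0.2]            # initial state distribution (assumed)

A = [[0.7, 0.3],           # transition probabilities P(next | current)
     [0.2, 0.8]]           # (assumed: intentions persist over time)

B = [[0.3, 0.1, 0.6],      # emissions for "browsing": mostly saccades
     [0.2, 0.6, 0.2]]      # emissions for "intending_selection": long fixations

def intention_posterior(observations):
    """Return P(state | observations so far) using the forward algorithm."""
    o = OBS[observations[0]]
    alpha = [pi[s] * B[s][o] for s in range(len(STATES))]
    for obs in observations[1:]:
        o = OBS[obs]
        alpha = [B[s][o] * sum(alpha[p] * A[p][s] for p in range(len(STATES)))
                 for s in range(len(STATES))]
    total = sum(alpha)
    return {STATES[s]: alpha[s] / total for s in range(len(STATES))}

# A run of long fixations shifts the posterior toward a selection intention.
gaze = ["saccade", "short_fixation", "long_fixation", "long_fixation"]
post = intention_posterior(gaze)
print(post)
```

In practice such a model would be trained from recorded gaze data rather than hand-specified, and the posterior would be thresholded or combined with context to trigger assistance (e.g. offering the object the user appears to be selecting).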