What you look at is what you get: eye movement-based interaction techniques
CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Information seeking in electronic environments
An evaluation of an eye tracker as a device for computer input
CHI '87 Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface
Manual and gaze input cascaded (MAGIC) pointing
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Inferring intent in eye-based interfaces: tracing eye movements with process models
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Understanding user goals in web search
Proceedings of the 13th International Conference on World Wide Web
Vision-based hand pose estimation: A review
Computer Vision and Image Understanding
Multimodal human-computer interaction: A survey
Computer Vision and Image Understanding
Determining the informational, navigational, and transactional intent of Web queries
Information Processing and Management: an International Journal
Automated eye-movement protocol analysis
Human-Computer Interaction
Eye Movement Analysis for Activity Recognition Using Electrooculography
IEEE Transactions on Pattern Analysis and Machine Intelligence
Social interactions in HRI: the robot view
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
To develop an efficient nonverbal human-computer interaction system, it is important to interpret the user's implicit intention, which is inherently vague. According to cognitive visuo-motor theory, human eye movements are a rich source of information about human intention and behavior, and Beatty's study showed that the task-evoked pupillary response is a consistent index of cognitive load and attention. In this paper, we propose a novel approach to recognizing a human's implicit intention based on eyeball movement patterns and pupil size variation. Following Bernard's research, we classify a human's implicit intention during a visual stimulus as either informational or navigational. In the present study, navigational intent refers to the desire to find interesting objects in a visual input without a particular goal, whereas informational intent refers to the desire to locate a particular object of interest. The proposed model uses salient eye features, namely fixation length, fixation count, and pupil size variation, as inputs to classify the human's implicit intention. Experimental results show that the proposed model achieves plausible recognition performance.
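The feature-to-intent mapping described in the abstract can be sketched as a simple rule-based classifier. This is only an illustrative reconstruction, not the paper's actual model: the feature names, thresholds, and voting rule below are hypothetical placeholders chosen to mirror the stated intuition (informational search tends to produce longer and more frequent fixations plus task-evoked pupil dilation).

```python
from dataclasses import dataclass

@dataclass
class GazeFeatures:
    fixation_length_ms: float  # mean fixation duration over the episode
    fixation_count: int        # number of fixations on candidate objects
    pupil_size_delta: float    # relative pupil dilation vs. a resting baseline

def classify_intent(f: GazeFeatures,
                    length_thresh: float = 300.0,
                    count_thresh: int = 5,
                    pupil_thresh: float = 0.05) -> str:
    """Label a gaze episode as 'informational' or 'navigational'.

    Majority vote over three cues: long fixations, many fixations,
    and pupil dilation all suggest goal-directed (informational) search;
    otherwise the episode is treated as free exploration (navigational).
    All thresholds are illustrative, not taken from the paper.
    """
    votes = sum([
        f.fixation_length_ms > length_thresh,
        f.fixation_count > count_thresh,
        f.pupil_size_delta > pupil_thresh,
    ])
    return "informational" if votes >= 2 else "navigational"
```

In practice the paper's model would learn this decision boundary from labeled gaze data rather than use fixed thresholds; the sketch only shows how the three salient features feed a binary intent decision, e.g. `classify_intent(GazeFeatures(420.0, 8, 0.08))` yields `"informational"`.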