In expert video analysis, selecting certain events in a continuous video stream is a frequently occurring operation, e.g., in surveillance applications. Due to the rich, dynamic visual input, the constantly high attention demanded, and the hand-eye coordination required for mouse interaction, this is a very demanding and exhausting task. Hence, relevant events might be missed. We propose using eye tracking and electroencephalography (EEG) as additional input modalities for event selection. From eye tracking, we derive the spatial location of a perceived event; from patterns in the EEG signal, we derive its temporal location within the video stream. This reduces the amount of active user input required in the selection process, and thus has the potential to reduce the user's workload. In this paper, we describe the methods employed for the localization processes and introduce the scenario developed to investigate the feasibility of this approach. Finally, we present and discuss results on the accuracy and speed of the method and investigate how the modalities interact.
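The core idea of combining the two modalities — the EEG signal indicating *when* an event was perceived and the gaze signal indicating *where* the user was looking at that moment — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation; the function name, data layout (timestamped gaze samples, an EEG-derived event timestamp), and nearest-sample lookup are all assumptions made for clarity.

```python
from bisect import bisect_left

def localize_event(gaze_samples, eeg_event_time):
    """Fuse the two modalities for event selection (illustrative sketch).

    The EEG supplies the temporal location of an event (eeg_event_time);
    the gaze sample closest to that time supplies its spatial location.

    gaze_samples: list of (timestamp, x, y) tuples, sorted by timestamp.
    Returns the (x, y) screen position attributed to the event.
    """
    times = [t for t, _, _ in gaze_samples]
    i = bisect_left(times, eeg_event_time)
    # Clamp to the recording boundaries, otherwise pick the
    # neighbouring sample that is closest in time.
    if i == 0:
        best = gaze_samples[0]
    elif i == len(times):
        best = gaze_samples[-1]
    else:
        before, after = gaze_samples[i - 1], gaze_samples[i]
        if eeg_event_time - before[0] <= after[0] - eeg_event_time:
            best = before
        else:
            best = after
    return best[1], best[2]

# Example: gaze sampled at 100 Hz; EEG flags an event at t = 0.512 s.
gaze = [(k / 100.0, 400 + k, 300) for k in range(100)]
print(localize_event(gaze, 0.512))  # → (451, 300)
```

In a real pipeline, the raw EEG event marker would additionally be shifted by the latency of the neural response relative to stimulus onset, and the gaze position would typically be taken from a detected fixation rather than a single raw sample; both refinements are omitted here for brevity.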