In this paper, we analyze complex gaze tracking data in a collaborative task and apply machine learning models to automatically predict skill-level differences between participants. Specifically, we present findings that address the two primary challenges for this prediction task: (1) extracting meaningful features from the gaze information, and (2) casting the prediction task as a machine learning (ML) problem. The results show that our approach, based on profile hidden Markov models, is up to 96% accurate and can make the determination as early as one minute into the collaboration, with only 5% of gaze observations registered. We also provide a qualitative analysis of gaze patterns that reveal the relative expertise level of the paired users in a collaborative learning user study.
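To make the two challenges concrete, the sketch below shows one plausible way to cast gaze-sequence classification as an ML problem. It uses hmmlearn's GaussianHMM as a stand-in for the profile hidden Markov models described above, and the feature choices, class labels, and helper names are hypothetical illustrations rather than the authors' actual pipeline.

```python
# Minimal sketch: classify gaze-feature sequences by fitting one HMM per class
# and assigning a new sequence to the class with the highest log-likelihood.
# GaussianHMM is used here as a stand-in for profile HMMs; all names are hypothetical.
import numpy as np
from hmmlearn import hmm

def fit_class_model(sequences, n_states=4, seed=0):
    """Fit one HMM to all gaze-feature sequences of a single class.

    sequences: list of (T_i, d) arrays, e.g. per-fixation features such as
    fixation duration and saccade length (a hypothetical feature choice).
    """
    X = np.concatenate(sequences)            # hmmlearn expects stacked observations
    lengths = [len(s) for s in sequences]    # plus the length of each sequence
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            n_iter=100,
                            random_state=seed)
    model.fit(X, lengths)
    return model

def predict_skill_difference(models, sequence):
    """Return the class label whose HMM assigns the highest log-likelihood."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Usage sketch with synthetic data for two hypothetical classes.
    rng = np.random.default_rng(0)
    train = {
        "similar_skill":   [rng.normal(0.0, 1.0, size=(50, 2)) for _ in range(10)],
        "different_skill": [rng.normal(1.5, 1.0, size=(50, 2)) for _ in range(10)],
    }
    models = {label: fit_class_model(seqs) for label, seqs in train.items()}
    test_seq = rng.normal(1.5, 1.0, size=(30, 2))
    print(predict_skill_difference(models, test_seq))
```

This per-class likelihood comparison also suggests why an early decision is possible: the log-likelihood can be evaluated on whatever prefix of the gaze sequence has been observed so far, so a prediction can be made from only the first minute of data.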