Physical activity, location, and a person's psychophysiological and affective state are common dimensions for developing context-aware systems in ubiquitous computing. An important yet missing contextual dimension is the cognitive context, which comprises all aspects of mental information processing, such as perception, memory, knowledge, or learning. In this work we investigate the feasibility of recognising visual memory recall. We use a recognition methodology that combines minimum redundancy maximum relevance (mRMR) feature selection with a support vector machine (SVM) classifier. We validate the methodology in a dual user study with a total of fourteen participants looking at familiar and unfamiliar pictures from four picture categories: abstract, landscapes, faces, and buildings. Using person-independent training, we are able to discriminate between familiar and unfamiliar abstract pictures with a top recognition rate of 84.3% (89.3% recall, 21.0% false positive rate) across all participants. We show that eye movement analysis is a promising approach to inferring the cognitive context of a person and discuss the key challenges for the real-world implementation of eye-based cognition-aware systems.
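The greedy mRMR criterion the abstract names can be sketched in a few lines: each candidate feature is scored by its mutual information with the class label (relevance) minus its mean mutual information with the features already selected (redundancy). The sketch below is a minimal NumPy illustration on synthetic data, not the authors' implementation; the histogram binning, feature dimensions, and data are illustrative assumptions. In the paper's pipeline, an SVM (e.g. scikit-learn's `SVC`) would then be trained on the selected eye-movement features.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Estimate mutual information (in nats) between two 1-D arrays
    via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mrmr_select(X, y, k):
    """Greedy mRMR: at each step pick the feature maximising
    relevance MI(f; y) minus mean redundancy MI(f; g) over
    the already-selected features g."""
    n_feat = X.shape[1]
    relevance = np.array([mutual_info(X[:, j], y) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]  # seed with most relevant feature
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Synthetic demo: features 0 and 1 carry class information,
# feature 2 is a near-copy of feature 0 (redundant), rest is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400).astype(float)
X = rng.normal(size=(400, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=400)

selected = mrmr_select(X, y, k=2)
print(selected)
```

Because feature 2 is almost identical to feature 0, the redundancy term keeps the two from being selected together: the chosen pair is feature 1 plus one of the 0/2 duplicates, even though both duplicates score highly on relevance alone.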