In this paper, we present Human-Aided Computing, an approach that uses an electroencephalograph (EEG) device to measure the presence and outcomes of implicit cognitive processing, processing that users perform automatically and may not even be aware of. We describe a classification system and present results from two proof-of-concept experiments. Results from the first experiment showed that our system could classify whether a user was looking at an image of a face, even when the user was not explicitly trying to make this determination. Results from the second experiment extended this finding to animal and inanimate-object categories, suggesting generality beyond face recognition. We further show that classification accuracy improves when images are shown multiple times, potentially to multiple people, reaching well above 90% with as few as ten presentations.
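To see why repeated presentations can lift accuracy so sharply, consider a simple majority-vote model: if each single presentation yields an independent classification that is correct with probability p, the probability that the majority of n presentations is correct grows quickly with n. This is an illustrative sketch only; the independence assumption and the voting scheme are ours, not a description of the authors' actual aggregation method.

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority vote over n independent classifications,
    each correct with probability p, picks the correct label.
    Ties (possible when n is even) are broken by a fair coin flip."""
    # Sum the binomial probabilities of a strict majority being correct.
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        # Half credit for an exact tie.
        k = n // 2
        acc += 0.5 * comb(n, k) * p**k * (1 - p)**k
    return acc

# With a hypothetical 70% single-presentation accuracy, ten presentations
# push the aggregate accuracy to roughly 90%.
print(majority_vote_accuracy(0.7, 10))
```

Under these illustrative assumptions, even modest single-trial accuracy compounds rapidly, which is consistent in spirit with the paper's observation that ten presentations suffice for accuracies well above 90%.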