Learning to Decode Cognitive States from Brain Images
Machine Learning
Detection of cognitive states from fMRI data using machine learning techniques
IJCAI'07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Assessing NeuroSky's Usability to Detect Attention Levels in an Assessment Exercise
Proceedings of the 13th International Conference on Human-Computer Interaction. Part I: New Trends
WCCI'12 Proceedings of the 2012 World Congress conference on Advances in Computational Intelligence
NeuCube evospike architecture for spatio-temporal modelling and pattern recognition of brain signals
ANNPR'12 Proceedings of the 5th INNS IAPR TC 3 GIRPR conference on Artificial Neural Networks in Pattern Recognition
Over the last few years, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful method for mapping the cognitive states of a human subject to specific functional areas of the subject's brain. Although fMRI has been widely used to determine average activation in different brain regions, the problem of automatically decoding cognitive states from instantaneous brain activations has received little attention. In this paper, we study this prediction problem on a complex time-series dataset that relates fMRI data (brain images) to the cognitive states of subjects while they watched three 20-minute movies. We describe the process used to reduce the extremely high-dimensional feature space and compare the models used for prediction. To solve the prediction task, we explored a standard linear model frequently used by neuroscientists, as well as a k-nearest neighbor model; these are currently the state of the art in this area. Finally, we provide experimental evidence that non-linear models, such as multi-layer perceptrons and especially recurrent neural networks, perform significantly better.
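The comparison described in the abstract can be sketched in code. The following is an illustrative example (not the authors' actual pipeline): it builds synthetic high-dimensional "brain image" features, reduces them with PCA to mirror the paper's dimensionality-reduction step, and cross-validates the three non-recurrent model families the abstract names, a linear classifier, k-nearest neighbors, and a multi-layer perceptron. All dataset shapes, hyperparameters, and the use of scikit-learn are assumptions for illustration only.

```python
# Hedged sketch: comparing model families from the abstract on synthetic data.
# Shapes, signal strength, and hyperparameters are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_voxels = 200, 500            # stand-in for fMRI time points x voxels
X = rng.standard_normal((n_samples, n_voxels))
y = rng.integers(0, 2, n_samples)         # binary "cognitive state" label
X[y == 1, :10] += 3.0                     # inject a strong signal in a few voxels

models = {
    "linear": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}

scores = {}
for name, clf in models.items():
    # PCA stands in for the paper's reduction of the extremely
    # high-dimensional voxel feature space before classification.
    pipe = make_pipeline(PCA(n_components=20), clf)
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
```

On real fMRI data the feature-reduction step and the recurrent models the paper favors would of course require the actual time-series structure, which this toy example does not capture.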