IEICE - Transactions on Information and Systems
In Brain-Computer Interface (BCI) research, subject- and session-specific training data are usually required to achieve satisfactory classification results. In this paper, we show that neural responses to different speaking tasks recorded with functional Near-Infrared Spectroscopy (fNIRS) are consistent enough across speakers to robustly classify speaking modes with models trained exclusively on other subjects. Our study thereby suggests that future fNIRS-based BCIs can be designed without time-consuming training, which, besides being cumbersome, might be impossible for users with disabilities. Accuracies of 71% and 61% were achieved in distinguishing segments containing overt speech and silent speech, respectively, from segments in which subjects were not speaking, without using any of the subject's data for training. To rule out artifact contamination, we filtered the data rigorously. To the best of our knowledge, no previous studies have demonstrated the zero-training capability of fNIRS-based BCIs.
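The cross-subject evaluation described above corresponds to a leave-one-subject-out scheme: for each subject, a classifier is trained only on the remaining subjects' data and tested on the held-out subject. The following is a minimal, hypothetical sketch of that scheme using synthetic feature vectors and a simple nearest-centroid classifier; the variable names, data shapes, and classifier are illustrative assumptions, not the study's actual features or model.

```python
import numpy as np

# Synthetic stand-in data (assumption): per-trial fNIRS feature vectors X,
# binary task labels y (e.g. speech vs. rest), and a subject ID per trial.
rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 8, 20, 16
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * trials_per_subject)
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

def fit_centroids(X, y):
    # One mean feature vector ("centroid") per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    # Assign each trial to the class with the nearest centroid.
    classes = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return classes[dists.argmin(axis=0)]

# Leave-one-subject-out: the test subject's data never enters training.
accs = []
for s in range(n_subjects):
    train, test = groups != s, groups == s
    centroids = fit_centroids(X[train], y[train])
    accs.append((predict(centroids, X[test]) == y[test]).mean())

print(f"mean cross-subject accuracy: {np.mean(accs):.2f}")
```

On random data the accuracy hovers around chance; the point of the sketch is only the evaluation protocol, in which "zero training" for a new user means scoring their trials with a model fit entirely on other subjects.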