Cross-subject classification of speaking modes using fNIRS

  • Authors:
  • Christian Herff, Dominic Heger, Felix Putze, Cuntai Guan, Tanja Schultz

  • Affiliations:
  • Christian Herff, Dominic Heger, Felix Putze, Tanja Schultz: Cognitive Systems Lab (CSL), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  • Cuntai Guan: Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore

  • Venue:
  • ICONIP'12: Proceedings of the 19th International Conference on Neural Information Processing, Part II
  • Year:
  • 2012

Abstract

In Brain-Computer Interface (BCI) research, subject- and session-specific training data are usually used to ensure satisfactory classification results. In this paper, we show that neural responses to different speaking tasks recorded with functional near-infrared spectroscopy (fNIRS) are consistent enough across speakers to robustly classify speaking modes with models trained exclusively on other subjects. Our study thereby suggests that future fNIRS-based BCIs can be designed without time-consuming training sessions, which, besides being cumbersome, might be impossible for users with disabilities. Accuracies of 71% and 61% were achieved in distinguishing segments containing overt speech and silent speech, respectively, from segments in which subjects were not speaking, without using any of the subject's own data for training. To rule out artifact contamination, we filtered the data rigorously. To the best of our knowledge, no previous study has demonstrated this zero-training capability of fNIRS-based BCIs.
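
The zero-training evaluation described in the abstract corresponds to a leave-one-subject-out protocol: for each held-out subject, a classifier is trained only on the remaining subjects' segments. The paper's actual feature extraction and classifier are not given here, so the following is a minimal sketch assuming scikit-learn, a linear discriminant classifier, and randomly generated placeholder features in place of real fNIRS measurements; the names X, y, and groups are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in for fNIRS features (NOT the paper's data):
# one row per segment, e.g. per-channel hemodynamic response statistics.
rng = np.random.default_rng(0)
n_subjects, segments_per_subject, n_features = 8, 40, 20

X = rng.normal(size=(n_subjects * segments_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * segments_per_subject)  # speaking vs. not speaking
groups = np.repeat(np.arange(n_subjects), segments_per_subject)  # subject ID per segment

# Shift the class means so the toy problem is actually learnable.
X[y == 1] += 0.5

# Leave-one-subject-out: every fold trains exclusively on the other
# subjects' data, mirroring the cross-subject "zero training" setup.
logo = LeaveOneGroupOut()
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, groups=groups, cv=logo)

print("Per-subject accuracies:", np.round(scores, 2))
print(f"Mean cross-subject accuracy: {scores.mean():.2f}")
```

Grouping the cross-validation folds by subject, rather than shuffling segments freely, is the design choice that makes the reported accuracies meaningful for an untrained new user: no segment from the test subject ever influences the trained model.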