Mutual information-based selection of optimal spatial-temporal patterns for single-trial EEG-based BCIs

  • Authors:
Kai Keng Ang; Zheng Yang Chin; Haihong Zhang; Cuntai Guan

  • Affiliations:
Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #21-01 Connexis, Singapore 138632, Singapore (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2012

Abstract

The common spatial pattern (CSP) algorithm is effective in decoding the spatial patterns of the corresponding neuronal activities from electroencephalogram (EEG) signals in brain-computer interfaces (BCIs). However, its effectiveness depends on the subject-specific time segment relative to the visual cue and on the temporal frequency band, both of which are often selected manually or heuristically. This paper presents a novel statistical method to automatically select the optimal subject-specific time segment and temporal frequency band based on the mutual information between the spatial-temporal patterns of the EEG signals and the corresponding neuronal activities. The proposed method comprises four progressive stages: multi-time segment and temporal frequency band-pass filtering, CSP spatial filtering, mutual information-based feature selection, and naive Bayesian classification. The proposed mutual information-based selection of optimal spatial-temporal patterns and its one-versus-rest multi-class extension were evaluated on single-trial EEG from BCI Competition IV Datasets IIb and IIa, respectively. The results showed that the proposed method yielded better session-to-session classification results than the best competition submission.
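
For concreteness, the sketch below outlines the kind of four-stage pipeline the abstract describes (filter-bank band-pass filtering, CSP spatial filtering, mutual information-based feature selection, and naive Bayesian classification) for a two-class problem. It is a minimal illustration, not the authors' implementation: the filter-bank band edges, the number of CSP filter pairs, the helper names (bandpass, csp_filters, log_var_features, fit_pipeline), and the use of scikit-learn's Gaussian naive Bayes in place of the paper's naive Bayesian classifier are all assumptions, and the multiple-time-segment search and one-versus-rest multi-class extension are omitted.

```python
# Minimal sketch of a filter-bank CSP pipeline with MI-based feature selection
# and a naive Bayes classifier. Band edges, filter orders, and classifier choice
# are illustrative assumptions, not the paper's exact configuration.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

def bandpass(trials, low, high, fs):
    """Band-pass filter each trial (trials: n_trials x n_channels x n_samples)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials, labels, n_pairs=2):
    """Compute CSP spatial filters for a two-class set of trials."""
    covs = []
    for c in np.unique(labels):
        x = trials[labels == c]
        covs.append(np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0))
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                      # (2*n_pairs, n_channels)

def log_var_features(trials, W):
    """Project trials through CSP filters and take normalized log-variance."""
    z = np.einsum("fc,ncs->nfs", W, trials)
    var = z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

def fit_pipeline(trials, labels, fs=250,
                 bands=((4, 8), (8, 12), (12, 16), (16, 20), (20, 24),
                        (24, 28), (28, 32), (32, 36), (36, 40)),
                 n_select=4):
    """Train the four-stage pipeline on labeled single-trial EEG."""
    filters, feats = [], []
    for low, high in bands:                      # stage 1: filter bank
        xf = bandpass(trials, low, high, fs)
        W = csp_filters(xf, labels)              # stage 2: CSP per band
        filters.append(W)
        feats.append(log_var_features(xf, W))
    X = np.hstack(feats)
    mi = mutual_info_classif(X, labels)          # stage 3: MI-based selection
    selected = np.argsort(mi)[-n_select:]
    clf = GaussianNB().fit(X[:, selected], labels)  # stage 4: naive Bayes
    return filters, selected, clf
```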