The common spatial pattern (CSP) algorithm is effective in decoding the spatial patterns of the corresponding neuronal activities from electroencephalogram (EEG) signals in brain-computer interfaces (BCIs). However, its effectiveness depends on the subject-specific time segment relative to the visual cue and on the temporal frequency band, both of which are often selected manually or heuristically. This paper presents a novel statistical method that automatically selects the optimal subject-specific time segment and temporal frequency band based on the mutual information between the spatial-temporal patterns extracted from the EEG signals and the corresponding neuronal activities. The proposed method comprises four progressive stages: multi-time-segment and temporal frequency band-pass filtering, CSP spatial filtering, mutual information-based feature selection, and naive Bayesian classification. The proposed mutual information-based selection of optimal spatial-temporal patterns and its one-versus-rest multi-class extension were evaluated on single-trial EEG from BCI Competition IV Datasets IIb and IIa, respectively. The results showed that the proposed method yielded better session-to-session classification results than the best submission.
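The middle stages of the pipeline described above (CSP spatial filtering, mutual information-based feature ranking, and naive Bayesian classification) can be sketched in NumPy. This is a minimal illustration on synthetic two-class data, not the authors' implementation: the function names, the histogram-based mutual information estimate, and the synthetic trial generator are all assumptions made for the example, and the multi-band filtering and multi-class extension are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def csp_filters(X1, X2, n_pairs=1):
    """CSP filters for two classes; X1, X2: (trials, channels, samples)."""
    def mean_cov(X):
        covs = [t @ t.T / np.trace(t @ t.T) for t in X]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in that space.
    d, U = np.linalg.eigh(C1 + C2)
    W = np.diag(d ** -0.5) @ U.T
    _, V = np.linalg.eigh(W @ C1 @ W.T)     # eigenvalues in ascending order
    filt = V.T @ W
    # The first/last rows give extreme variance ratios between the classes.
    return filt[np.r_[:n_pairs, -n_pairs:0]]

def log_var_features(W, X):
    """Log of normalized variance of the spatially filtered signals."""
    Z = np.einsum('fc,ncs->nfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

def mutual_info(f, y, bins=6):
    """Histogram estimate (in nats) of I(F; Y) for one feature column."""
    edges = np.histogram_bin_edges(f, bins)
    fb = np.digitize(f, edges[1:-1])
    joint = np.zeros((bins, 2))
    for b, c in zip(fb, y):
        joint[b, c] += 1
    p = joint / joint.sum()
    pf, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pf @ py)[nz])).sum())

class GaussianNB:
    """Minimal Gaussian naive Bayes classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self
    def predict(self, X):
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Synthetic trials: each class has high variance on a different channel.
def make_trials(n, scales):
    X = rng.standard_normal((n, 4, 128))
    X[:, 0] *= scales[0]
    X[:, 1] *= scales[1]
    return X

X1, X2 = make_trials(40, (3, 1)), make_trials(40, (1, 3))
y = np.r_[np.zeros(40, int), np.ones(40, int)]

W = csp_filters(X1, X2, n_pairs=1)
F = log_var_features(W, np.concatenate([X1, X2]))
F = np.hstack([F, rng.standard_normal((80, 1))])   # add one noise feature
mi = [mutual_info(F[:, j], y) for j in range(F.shape[1])]
top = np.argsort(mi)[::-1][:2]                     # keep the two best by MI
clf = GaussianNB().fit(F[:, top], y)
acc = (clf.predict(F[:, top]) == y).mean()
```

In this sketch the mutual information scores of the two CSP log-variance features clearly exceed that of the appended noise feature, so MI ranking discards the uninformative dimension before classification, mirroring the feature-selection stage of the pipeline.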