Machine-learning-based coadaptive calibration for brain-computer interfaces
Neural Computation
Electroencephalogram (EEG) signals used to control brain-computer interfaces (BCIs) are nonstationary, which makes real-time classification of mental tasks difficult. Event-related potentials elicited by BCI errors can serve as online labels for adapting BCI classifiers; however, their detection is imperfect, making this a partially supervised classification problem. In this study, two linear binary classifiers are adapted using uncertain labels on artificial data sets representing various concept-drift scenarios, as well as on a real motor imagery BCI data set. Simulated labels, both perfect and imperfect, are incorporated into the classifiers, which are adapted in two ways: (i) only after trials on which a BCI mistake was detected, and (ii) after every trial, regardless of whether an error was detected. We find that all data sets benefit from adaptation with imperfect labels, and that adapting after every trial outperforms adapting only after detected errors, especially when the labels are imperfect and the classes are inseparable.
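The two adaptation policies compared in the abstract can be illustrated with a minimal sketch. This is not the paper's actual method or data: it assumes a simple online logistic-regression update, two-dimensional synthetic features, a 10% label-noise rate, and an error detector that fires on 70% of trials; all names (`adapt`, `policy`, `trials`) are hypothetical.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adapt(trials, policy="all", lr=0.1):
    """Adapt a linear (logistic) classifier online from noisy-label trials.

    Each trial is (features, noisy_label, error_detected).
    policy="all":    update after every trial (policy (ii) in the abstract);
    policy="errors": update only after trials flagged as errors (policy (i)).
    """
    w, b = [0.0, 0.0], 0.0
    for x, y, err_detected in trials:
        if policy == "errors" and not err_detected:
            continue
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = y - p  # gradient of the log-likelihood w.r.t. the logit
        w[0] += lr * g * x[0]
        w[1] += lr * g * x[1]
        b += lr * g
    return w, b

def accuracy(w, b, data):
    hits = sum(
        (sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5) == (y == 1)
        for x, y in data
    )
    return hits / len(data)

# Demo: synthetic two-class data with ~10% label noise and an
# imperfect error detector that fires on 70% of trials (assumed values).
random.seed(0)
trials, data = [], []
for i in range(500):
    y = i % 2
    mu = 1.0 if y else -1.0
    x = (mu + random.gauss(0, 0.3), mu + random.gauss(0, 0.3))
    noisy = y if random.random() > 0.1 else 1 - y
    trials.append((x, noisy, random.random() < 0.7))
    data.append((x, y))

w_all, b_all = adapt(trials, policy="all")
w_err, b_err = adapt(trials, policy="errors")
```

Under policy `"errors"` the classifier simply sees fewer (and differently biased) updates, which is one intuition for why updating after every trial can help when labels are noisy and classes overlap.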