Recently, the importance of incremental learning in changing environments has been widely acknowledged. This paper proposes a new ensemble learning method based on two-level hypothesis tests for incremental learning in concept-changing environments. We treat the classification error as a stochastic variable and introduce hypothesis testing as a mechanism for adaptively selecting classifiers. Hypothesis tests are used to distinguish useful individual classifiers from useless ones and to identify classifiers that need to be updated. Classifiers deemed useful by the hypothesis test are integrated to form the final prediction. Experiments with simulated concept-changing scenarios show that the proposed method can adaptively choose appropriate classifiers and adapt quickly to different kinds of concept change, maintaining its performance level.
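The core idea above — testing each member's classification error statistically and letting only the "useful" members vote — can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the class name, the sliding-window size, and the choice of a one-sided z-test against chance-level error (p0 = 0.5 for a binary task) are all assumptions made for the example.

```python
import math

class HypothesisTestEnsemble:
    """Illustrative sketch: track each classifier's recent 0/1 error
    flags in a sliding window and use a one-sided z-test to decide
    whether its error rate is significantly below random guessing.
    Only members passing the test contribute to the vote."""

    def __init__(self, classifiers, window=100):
        self.classifiers = list(classifiers)
        self.window = window
        # One list of recent error flags (1 = wrong) per classifier.
        self.errors = [[] for _ in self.classifiers]

    def _is_useful(self, i, p0=0.5):
        # H0: error rate >= p0 (no better than chance).
        # Reject H0 -> classifier is deemed useful.
        flags = self.errors[i]
        n = len(flags)
        if n < 10:  # too little evidence: keep the classifier for now
            return True
        p_hat = sum(flags) / n
        z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
        return z < -1.645  # one-sided critical value at alpha = 0.05

    def update(self, x, y):
        # Record whether each member got the labeled example right.
        for i, clf in enumerate(self.classifiers):
            self.errors[i].append(0 if clf(x) == y else 1)
            if len(self.errors[i]) > self.window:
                self.errors[i].pop(0)

    def predict(self, x):
        useful = [clf for i, clf in enumerate(self.classifiers)
                  if self._is_useful(i)]
        # Fall back to the full ensemble if the test rejects everyone.
        votes = [clf(x) for clf in (useful or self.classifiers)]
        return max(set(votes), key=votes.count)
```

As a toy usage: with one classifier that always predicts the true label and one that always predicts the opposite, the test quickly flags the second as useless, so the ensemble's vote follows the accurate member.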