Assessing the impact of changing environments on classifier performance
Canadian AI'08: Proceedings of the 21st Conference of the Canadian Society for Computational Studies of Intelligence on Advances in Artificial Intelligence
In this paper, we test some of the most commonly used classifiers to identify which are the most robust to changing environments. An environment may change over time due to contextual or definitional shifts, or it may change with location. It would be surprising if the performance of common classifiers did not degrade under such changes; the question we address here is whether some types of classifier are inherently more immune to these effects than others. In this study, we simulate a changing environment by reducing the influence on the class of the most significant attributes. Based on our analysis, K-Nearest Neighbor and Artificial Neural Networks are the most robust learners, ensemble algorithms are somewhat robust, whereas Naive Bayes, Logistic Regression and, in particular, Decision Trees are the most affected.
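The simulation idea described above can be illustrated with a small sketch. This is not the authors' actual protocol; it is a hypothetical minimal version assuming two-class Gaussian data in which feature 0 is the "most significant attribute". We train a simple k-Nearest Neighbor classifier on data from the original environment, then generate test data in which feature 0's influence on the class is reduced, and observe the accuracy drop.

```python
import random

random.seed(0)

def make_data(n, signal=1.0):
    """Generate (features, label) pairs. Feature 0 carries `signal` times
    the class information; feature 1 is pure noise. Lowering `signal`
    simulates an environment change that weakens the key attribute."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x0 = signal * (2 * y - 1) + random.gauss(0, 0.5)  # class-dependent
        x1 = random.gauss(0, 1.0)                         # irrelevant noise
        data.append(((x0, x1), y))
    return data

def knn_predict(train, x, k=5):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(train,
                     key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
    votes = [y for _, y in nearest[:k]]
    return max(set(votes), key=votes.count)

def accuracy(train, test):
    return sum(knn_predict(train, x) == y for x, y in test) / len(test)

train = make_data(200, signal=1.0)
test_same = make_data(200, signal=1.0)    # same environment as training
test_drift = make_data(200, signal=0.2)   # key attribute weakened

acc_same = accuracy(train, test_same)
acc_drift = accuracy(train, test_drift)
print(f"accuracy, same environment:    {acc_same:.2f}")
print(f"accuracy, changed environment: {acc_drift:.2f}")
```

Repeating this comparison with different classifiers trained on the same data (and progressively smaller `signal` values) gives a robustness profile for each learner, which is the style of comparison the abstract reports.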