Classifier error is the product of model bias and data variance. While it is important to understand the bias introduced by the choice of learning algorithm, it is equally important to understand the variability in data over time, since even the One True Model might perform poorly when training and evaluation samples diverge. The ability to identify distributional divergence is therefore critical for pinpointing when fractures in classifier performance will occur, particularly since conventional evaluation methods such as ten-fold cross-validation and hold-out are poor predictors under divergence. This article implements a comprehensive evaluation framework that proactively detects breakpoints in classifiers' predictions and shifts in data distributions through a series of statistical tests. We outline and exercise three scenarios under which data changes: sample selection bias, covariate shift, and shifting class priors. We evaluate the framework with a variety of classifiers and datasets.
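As a minimal illustration of the idea (not the paper's exact procedure), a covariate shift between a training-time and an evaluation-time sample of a single feature can be flagged with a two-sample Kolmogorov–Smirnov test; the function name and significance threshold below are our own choices:

```python
import numpy as np
from scipy.stats import ks_2samp

def covariate_shift_detected(train_sample, eval_sample, alpha=0.01):
    """Flag a covariate shift when the two-sample KS test rejects
    the hypothesis that both samples come from the same distribution."""
    statistic, p_value = ks_2samp(train_sample, eval_sample)
    return p_value < alpha

rng = np.random.default_rng(0)

# Training-time feature values, and evaluation-time values whose mean has drifted
train = rng.normal(loc=0.0, scale=1.0, size=1000)
shifted = rng.normal(loc=0.5, scale=1.0, size=1000)

print(covariate_shift_detected(train, shifted))
```

In a full framework such a test would be run per feature (or on a multivariate summary) as new evaluation data arrives, raising an alarm before classifier performance fractures rather than after.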