Growing Gaussian mixtures network for classification applications
Signal Processing
The fundamental assumption underlying most learning algorithms, that training and operational data come from the same probability distribution, is often not satisfied in practice. Several algorithms have been proposed to cope with classification problems where the class priors may change after training, but they can perform poorly when the class-conditional data densities also change. In this paper, we propose a re-estimation algorithm that makes use of unlabeled operational data to adapt the classifier behavior to changing scenarios. We assume that (a) the classes may be decomposed into several (unknown) subclasses, and (b) the subclass prior probabilities may change after training. Experimental results on practical applications show an improvement over an adaptive method based on class priors, while preserving similar performance when there are no subclass changes.
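The core idea of prior re-estimation from unlabeled data can be illustrated with an EM-style sketch. The function and variable names below are illustrative, not from the paper: the subclass-conditional densities are assumed fixed from training, and only the subclass priors are iteratively re-fit on operational data (each E-step computes subclass posteriors, each M-step averages them into new priors).

```python
import numpy as np

def reestimate_subclass_priors(densities, init_priors, n_iter=50):
    """EM re-estimation of subclass priors on unlabeled operational data.

    densities: (n_samples, n_subclasses) array holding the fixed
        subclass-conditional densities p(x | k) evaluated on the data.
    init_priors: subclass priors estimated at training time.
    """
    priors = np.asarray(init_priors, dtype=float)
    for _ in range(n_iter):
        joint = densities * priors                        # p(x, k)
        post = joint / joint.sum(axis=1, keepdims=True)   # E-step: p(k | x)
        priors = post.mean(axis=0)                        # M-step: new priors
    return priors

def gauss(x, mu, sigma=1.0):
    """Univariate Gaussian density (stands in for any trained density model)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy scenario: two subclasses whose priors drifted from (0.5, 0.5) at
# training time to (0.8, 0.2) in operation; the densities are unchanged.
rng = np.random.default_rng(0)
n = 2000
from_sub0 = rng.random(n) < 0.8
x = np.where(from_sub0, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

dens = np.column_stack([gauss(x, -2.0), gauss(x, 2.0)])
new_priors = reestimate_subclass_priors(dens, [0.5, 0.5])
print(new_priors)  # close to the operational priors (0.8, 0.2)
```

The re-estimated subclass priors can then be summed within each class to obtain adapted class priors, which is what lets the classifier track changes that a class-level prior adjustment alone would miss.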