A new learning strategy for classification problems with different training and test distributions

  • Authors:
  • Óscar Pérez; Manuel Sánchez-Montañés

  • Affiliations:
  • Universidad Autónoma de Madrid, Escuela Politécnica Superior, Madrid, Spain; Universidad Autónoma de Madrid, Escuela Politécnica Superior, Madrid, Spain, and Cognodata Consulting, Madrid, Spain

  • Venue:
  • IWANN'07: Proceedings of the 9th International Work-Conference on Artificial Neural Networks
  • Year:
  • 2007


Abstract

Standard machine learning techniques assume that the statistical structure of the training and test datasets is the same (i.e., the same attribute distribution p(x) and the same class distribution p(c|x)). However, in real prediction problems this is often not the case, for several reasons. For example, the training set is frequently not representative of the whole problem due to sample selection biases during its acquisition. In addition, the measurement biases in training can differ from those in test (for example, when different measurement devices are used). Moreover, in real prediction tasks the statistical structure of the classes is usually not static but evolves in time, and there is typically a time lag between the training and test sets. Any of these problems can severely degrade the performance of a learning algorithm. Here we present a new learning strategy that constructs a classifier in two steps. First, the labeled examples of the training set are used to construct a statistical model of the problem. In the second step, the model is improved using the unlabeled patterns of the test set by means of a novel extension of the Expectation-Maximization (EM) algorithm presented here. We show the convergence properties of the algorithm and illustrate its performance on an artificial problem. Finally, we demonstrate its strengths on a heart disease diagnosis problem where the training set is taken from a different hospital than the test set.
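
The abstract gives no algorithmic detail beyond the two-step structure, so the following is only a minimal sketch of that general scheme, assuming Gaussian class-conditional densities: step one fits a generative model on the labeled training data, and step two runs standard EM on the unlabeled test patterns to adapt the class priors, means, and covariances to the shifted test distribution. The function names (fit_gaussians, adapt_em) and the choice of Gaussian model are illustrative assumptions; the paper's actual extension of EM is not specified in the abstract and may differ substantially.

    import numpy as np

    def fit_gaussians(X, y, n_classes):
        # Step 1: one Gaussian per class, estimated from the labeled training set.
        priors = np.array([np.mean(y == c) for c in range(n_classes)])
        means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
        covs = np.array([np.cov(X[y == c], rowvar=False) + 1e-6 * np.eye(X.shape[1])
                         for c in range(n_classes)])
        return priors, means, covs

    def gaussian_pdf(X, mean, cov):
        # Multivariate normal density evaluated at every row of X.
        d = X.shape[1]
        diff = X - mean
        inv = np.linalg.inv(cov)
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        return np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)) / norm

    def adapt_em(X_test, priors, means, covs, n_iter=50):
        # Step 2: refine the model with EM on the unlabeled test patterns.
        n, d = X_test.shape
        K = len(priors)
        for _ in range(n_iter):
            # E-step: responsibilities p(c | x) under the current model.
            lik = np.stack([priors[c] * gaussian_pdf(X_test, means[c], covs[c])
                            for c in range(K)], axis=1)
            resp = lik / lik.sum(axis=1, keepdims=True)
            # M-step: re-estimate priors, means, and covariances from the soft labels.
            Nk = resp.sum(axis=0)
            priors = Nk / n
            means = (resp.T @ X_test) / Nk[:, None]
            covs = np.array([((resp[:, c, None] * (X_test - means[c])).T
                              @ (X_test - means[c])) / Nk[c] + 1e-6 * np.eye(d)
                             for c in range(K)])
        return priors, means, covs

After adaptation, a test pattern x would be classified by the maximum posterior, argmax over c of p(c) p(x|c), under the refined parameters. In practice one would also guard against degenerate covariances (the small ridge added above) and verify that the adapted components still correspond to the original class labels; how the paper's EM extension handles such issues is part of its contribution and is not reflected in this sketch.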