Controlled permutations for testing adaptive classifiers

  • Authors: Indre Žliobaite
  • Affiliations: Smart Technology Research Center, Bournemouth University, Poole, UK
  • Venue: DS'11 Proceedings of the 14th International Conference on Discovery Science
  • Year: 2011

Abstract

We study the evaluation of online classifiers that are designed to adapt to changes in the data distribution over time (concept drift). A standard procedure for evaluating such classifiers is test-then-train, which iteratively uses each incoming instance first for testing and then for updating the classifier. Comparing classifiers on the basis of such a test risks giving biased results, since the dataset is processed only once, in a fixed sequential order. Such a test establishes how well classifiers adapt when changes happen at fixed time points, whereas the ultimate goal is to assess how well they would adapt when changes of a similar type happen unexpectedly. To reduce the risk of biased evaluation, we propose running multiple tests on permuted copies of the data. A random permutation is not suitable, as it makes the data distribution uniform over time and thereby destroys the adaptive learning problem. We develop three permutation techniques with theoretical control mechanisms that ensure the distinct distributions in the data are preserved while the data order is perturbed. The idea is to manipulate blocks of data while keeping neighboring instances close together. Our permutations reduce the risk of biased evaluation by making it possible to analyze the sensitivity of classifiers to variations in the data order.
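
The abstract does not spell out the paper's three permutation techniques or their control mechanisms, so the following is only a minimal sketch of the general block-wise idea: split the stream into contiguous blocks, shuffle the block order, and keep instances inside each block in their original order, so that local distributional structure survives while the positions of change points vary between runs. The function name block_permute, the block_size parameter, and the toy evaluation loop are illustrative assumptions, not the authors' method.

import random

def block_permute(stream, block_size=100, seed=None):
    """Shuffle contiguous blocks of a data stream.

    Instances inside each block keep their original relative order,
    so local distributional structure is largely preserved while the
    positions of the blocks (and hence of any change points) vary
    from run to run.
    """
    rng = random.Random(seed)
    # Split the stream into contiguous blocks of block_size instances.
    blocks = [stream[i:i + block_size] for i in range(0, len(stream), block_size)]
    # Permute the order of the blocks, not the instances themselves.
    rng.shuffle(blocks)
    # Concatenate the blocks back into a single permuted stream.
    return [instance for block in blocks for instance in block]

if __name__ == "__main__":
    # Toy "stream": the instance index stands in for an (x, y) pair.
    stream = list(range(12))
    # A test-then-train evaluation would be repeated on each permuted copy;
    # the spread of scores across runs indicates how sensitive the
    # classifier is to the data order.
    for run in range(3):
        print(run, block_permute(stream, block_size=3, seed=run))

Repeating the test-then-train procedure on several such permuted copies yields a distribution of scores rather than a single number, which is the kind of sensitivity analysis the abstract refers to.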