Identifying and Handling Mislabelled Instances

  • Authors:
  • Fabrice Muhlenbach, Stéphane Lallich, Djamel A. Zighed

  • Affiliations:
  • ERIC Laboratory, Lumière University (Lyon 2), Bâtiment L, 5 Avenue Pierre Mendès-France, 69676 Bron Cedex, France. fmuhlenb@univ-lyon2.fr, lallich@univ-lyon2.fr, zighed@univ-lyon2.fr

  • Venue:
  • Journal of Intelligent Information Systems
  • Year:
  • 2004

Abstract

Data mining and knowledge discovery aim at producing useful and reliable models from the data. Unfortunately, some databases contain noisy data that hinder the generalization of these models. An important source of noise is mislabelled training instances. We propose a new approach that improves classification accuracy through a preliminary filtering procedure. An example is considered suspect when, in its neighbourhood defined by a geometrical graph, the proportion of examples of the same class is not significantly greater than in the database as a whole. Such suspect examples in the training data can be removed or relabelled. The filtered training set is then provided as input to learning algorithms. Our experiments on ten benchmarks from the UCI Machine Learning Repository, using 1-NN as the final learning algorithm, show that removal gives better results than relabelling. Removal maintains the generalization error rate when class noise ranging from 0 to 20% is introduced, especially when the classes are well separated. Finally, the proposed filtering method is compared to the relaxation relabelling scheme.
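
The sketch below illustrates the general idea of the filtering step described in the abstract: flag an instance as suspect when the proportion of same-class examples in its neighbourhood is not significantly greater than the class's global proportion. It is not the authors' implementation; a k-nearest-neighbour graph stands in for their geometrical neighbourhood graph, and the one-sided binomial test, `k`, and `alpha` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom
from sklearn.neighbors import NearestNeighbors


def flag_suspect_examples(X, y, k=10, alpha=0.05):
    """Flag instances whose neighbourhood is not significantly purer
    than the global class distribution (one-sided binomial test).

    A k-NN graph is used here as a stand-in for the geometrical
    neighbourhood graph of the paper; k and alpha are illustrative.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)

    # Global proportion of each class in the training set.
    classes, counts = np.unique(y, return_counts=True)
    global_prop = dict(zip(classes, counts / n))

    # k nearest neighbours of every point (excluding the point itself).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    idx = idx[:, 1:]  # drop the point itself

    suspect = np.zeros(n, dtype=bool)
    for i in range(n):
        same = int(np.sum(y[idx[i]] == y[i]))
        p0 = global_prop[y[i]]
        # P(at least `same` same-class neighbours out of k) under the
        # global proportion p0; large p-value => neighbourhood is not
        # significantly purer than the database, so the example is suspect.
        p_value = binom.sf(same - 1, k, p0)
        suspect[i] = p_value > alpha
    return suspect


# Usage: remove suspect examples before training the final classifier,
# e.g. a 1-NN classifier as in the experiments reported in the abstract.
# mask = flag_suspect_examples(X, y)
# X_clean, y_clean = X[~mask], y[~mask]
```

Relabelling instead of removal would replace each suspect instance's class with the majority class of its neighbours; as reported in the abstract, removal tends to give better results.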