Discretisation in Lazy Learning Algorithms

  • Authors:
  • Kai Ming Ting

  • Affiliations:
  • Basser Department of Computer Science, University of Sydney, NSW 2006, Australia

  • Venue:
  • Artificial Intelligence Review - Special issue on lazy learning
  • Year:
  • 1997


Abstract

This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). This approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms and examine it empirically using both real-world and artificial data to characterise the benefits of discretisation in lazy learning algorithms. Specifically, we have shown that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance for irrelevant continuous attributes. The proposed approach constrains the representation space in lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. Our generally better results, obtained using a more restricted representation language, indicate that employing a powerful representation language in a learning algorithm is not always the best choice, as it can lead to a loss of accuracy.
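To make the pipeline concrete, here is a minimal sketch of the general idea: pre-discretise each continuous attribute into nominal bins, then run a lazy (1-nearest-neighbour) classifier on the resulting nominal values with an overlap distance. This is an illustrative simplification, not the paper's method: Fayyad and Irani's technique chooses cut points by an entropy/MDL criterion, whereas the sketch below uses equal-width bins, and the helper names (`equal_width_bins`, `discretise`, `one_nn`) are invented for this example.

```python
# Hedged sketch: discretise-then-1-NN. Equal-width binning stands in for
# Fayyad & Irani's entropy/MDL cut-point selection used in the paper.

def equal_width_bins(values, k):
    """Learn k-1 equal-width cut points from a list of floats."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant attribute
    return [lo + width * i for i in range(1, k)]

def discretise(x, cuts):
    """Map a continuous value to a bin index (a nominal label)."""
    return sum(x >= c for c in cuts)

def one_nn(train, query):
    """1-NN with overlap distance: count attributes whose labels differ."""
    def dist(a, b):
        return sum(ai != bi for ai, bi in zip(a, b))
    return min(train, key=lambda inst: dist(inst[0], query))[1]

# Toy usage: two continuous attributes, two classes.
X = [[1.0, 10.0], [1.2, 11.0], [5.0, 50.0], [5.5, 49.0]]
y = ["a", "a", "b", "b"]
cuts = [equal_width_bins(col, 3) for col in zip(*X)]
nominal = [tuple(discretise(v, c) for v, c in zip(row, cuts)) for row in X]
train = list(zip(nominal, y))
query = tuple(discretise(v, c) for v, c in zip([5.2, 48.0], cuts))
print(one_nn(train, query))  # → b
```

Because distances are computed over bin labels rather than raw values, small perturbations within a bin do not change the neighbourhood, which is one intuition behind the noise-reduction effect the abstract describes.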