A Further Comparison of Splitting Rules for Decision-Tree Induction

  • Authors:
  • Wray Buntine; Tim Niblett

  • Affiliations:
  • Wray Buntine: The Turing Institute, George House, 36 North Hanover St., Glasgow, G1 2AD, U.K. Current address: Research Institute for Advanced Computer Science and Artificial Intelligence Research Branch ...
  • Tim Niblett: The Turing Institute, George House, 36 North Hanover St., Glasgow, G1 2AD, U.K. TIM@TURING.AC.UK

  • Venue:
  • Machine Learning
  • Year:
  • 1992

Abstract

One approach to learning classification rules from examples is to build decision trees. A review and comparison paper by Mingers (Mingers, 1989) looked at the first stage of tree building, which uses a “splitting rule” to grow trees with a greedy recursive partitioning algorithm. That paper considered a number of different measures and experimentally examined their behavior on four domains. Its main conclusion was that a random splitting rule does not significantly decrease classification accuracy. This note suggests an alternative experimental method and presents additional results on further domains. Our results indicate that random splitting leads to increased error. These results are at variance with those presented by Mingers.
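
The object of study here is the “splitting rule” that drives greedy recursive partitioning. The sketch below is a minimal illustration, not the authors' code: it assumes binary features, uses invented names (entropy, info_gain, grow_tree), and contrasts an information-gain rule with the random rule whose effect on error the paper measures.

    import math
    import random

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        n = len(labels)
        return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                    for c in set(labels))

    def info_gain(examples, labels, feature):
        """Information gain of splitting on a binary feature."""
        left = [y for x, y in zip(examples, labels) if x[feature] == 0]
        right = [y for x, y in zip(examples, labels) if x[feature] == 1]
        if not left or not right:
            return 0.0
        n = len(labels)
        remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        return entropy(labels) - remainder

    def grow_tree(examples, labels, features, rule="gain"):
        """Greedy recursive partitioning: choose a feature by the splitting
        rule, split the examples on it, and recurse until a node is pure
        or no features remain."""
        if len(set(labels)) == 1 or not features:
            return max(set(labels), key=labels.count)  # majority-class leaf
        if rule == "random":
            split = random.choice(features)            # random splitting rule
        else:
            split = max(features, key=lambda f: info_gain(examples, labels, f))
        rest = [f for f in features if f != split]
        branches = {}
        for v in (0, 1):
            part = [(x, y) for x, y in zip(examples, labels) if x[split] == v]
            if not part:
                branches[v] = max(set(labels), key=labels.count)
            else:
                xs, ys = zip(*part)
                branches[v] = grow_tree(list(xs), list(ys), rest, rule)
        return (split, branches)

    # Toy usage: XOR of two binary features.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 0]
    print(grow_tree(X, y, features=[0, 1], rule="gain"))
    print(grow_tree(X, y, features=[0, 1], rule="random"))

On this toy problem both rules happen to reach pure leaves; the paper's question is whether, averaged over realistic domains, replacing the informed choice of split feature with a random one measurably increases error.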