The Difficulty of Reduced Error Pruning of Leveled Branching Programs

  • Authors:
  • Tapio Elomaa; Matti Kääriäinen

  • Affiliations:
  • Department of Computer Science, P.O. Box 26, FIN-00014 University of Helsinki, Finland. E-mail: elomaa@cs.helsinki.fi; mtkaaria@cs.helsinki.fi

  • Venue:
  • Annals of Mathematics and Artificial Intelligence
  • Year:
  • 2004

Abstract

Induction of decision trees is one of the most successful approaches to supervised machine learning. Branching programs are a generalization of decision trees and, by the boosting analysis, are exponentially more efficiently learnable than decision trees. However, this advantage has not materialized in experiments. Decision trees are easy to simplify using pruning, and reduced error pruning is one of the simplest decision tree pruning algorithms; for branching programs, no pruning algorithms are known. In this paper we prove that reduced error pruning of branching programs is infeasible: finding the optimal pruning of a branching program with respect to a set of pruning examples that is separate from the set of training examples is NP-complete. This intractability result forces us to consider approximating reduced error pruning. Unfortunately, even finding an approximate solution of arbitrary accuracy is computationally infeasible; in particular, reduced error pruning of branching programs is APX-hard. Our experiments show that, despite these negative theoretical results, heuristic pruning of branching programs can reduce their size without significantly affecting accuracy.
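
For context, reduced error pruning on an ordinary decision tree works bottom-up on a held-out pruning set: a subtree is replaced by a majority-class leaf whenever doing so does not increase error on the pruning examples that reach it. The Python sketch below illustrates this tree version only; the `Node` representation, threshold splits, and the `majority_label` helper are assumptions made for this example, and the sketch is not the paper's algorithm for branching programs (whose optimal pruning the abstract shows to be NP-complete and APX-hard).

```python
# Illustrative sketch of reduced error pruning (REP) on a decision tree.
# Node layout, threshold splits, and helpers are assumptions for this
# example, not taken from the paper.

class Node:
    def __init__(self, attr=None, threshold=None, left=None, right=None, label=None):
        self.attr = attr            # attribute index tested at an internal node
        self.threshold = threshold  # split threshold for that attribute
        self.left = left            # subtree for attr value <= threshold
        self.right = right          # subtree for attr value > threshold
        self.label = label          # class label if this node is a leaf

    def is_leaf(self):
        return self.label is not None

def predict(node, x):
    """Route example x down the tree and return the reached leaf's label."""
    while not node.is_leaf():
        node = node.left if x[node.attr] <= node.threshold else node.right
    return node.label

def errors(node, examples):
    """Number of pruning-set examples (x, y) the subtree misclassifies."""
    return sum(1 for x, y in examples if predict(node, x) != y)

def majority_label(examples, default=0):
    """Most common class among the examples reaching a node."""
    labels = [y for _, y in examples]
    return max(set(labels), key=labels.count) if labels else default

def reduced_error_prune(node, pruning_examples):
    """Bottom-up REP: replace a subtree by a majority leaf whenever that
    does not increase error on the pruning examples reaching it."""
    if node.is_leaf():
        return node
    left_ex = [(x, y) for x, y in pruning_examples if x[node.attr] <= node.threshold]
    right_ex = [(x, y) for x, y in pruning_examples if x[node.attr] > node.threshold]
    node.left = reduced_error_prune(node.left, left_ex)
    node.right = reduced_error_prune(node.right, right_ex)
    leaf = Node(label=majority_label(pruning_examples))
    # Prune if the single leaf does at least as well on the pruning set.
    if errors(leaf, pruning_examples) <= errors(node, pruning_examples):
        return leaf
    return node
```

On a tree, each pruning example reaches a node along a unique path, so these local replacement decisions can be made independently subtree by subtree, which is what makes REP run efficiently. In a leveled branching program, nodes are shared between paths, and the abstract's hardness results indicate that no comparably efficient exact (or arbitrarily accurate approximate) pruning procedure exists there, unless P = NP.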