The regression tree (RT) induction process has two major phases: the growth phase and the pruning phase. The pruning phase aims to generalize the RT produced in the growth phase by selecting a subtree that avoids over-fitting the training data. Most post-pruning methods treat pruning as a single-objective problem (i.e., maximize validation accuracy) and consider simplicity (in terms of the number of leaves) only to break ties. However, it is well known that, apart from accuracy, other performance measures (e.g., stability, simplicity) are important for evaluating RT quality. In this paper we present an integrated approach to the post-pruning phase that simultaneously accommodates multiple performance measures relevant to RT quality, and obtains the optimal subtree based on user-provided preference and value-function information.
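The selection idea described above can be illustrated with a minimal sketch (not the paper's exact formulation): each candidate subtree from the pruning phase is scored by a user-supplied additive value function over several performance measures, and the highest-scoring subtree is chosen. The measure names, normalization, and weights below are illustrative assumptions.

```python
def normalize(values, higher_is_better=True):
    """Scale a list of raw measure values to [0, 1] (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def select_subtree(candidates, weights):
    """Pick the candidate subtree maximizing a weighted additive value function.

    candidates: list of dicts of raw measures per pruned subtree, e.g.
        {"name": "T1", "accuracy": 0.91, "leaves": 14, "stability": 0.80}
    weights: user-supplied importance weights (assumed to sum to 1).
    """
    acc = normalize([c["accuracy"] for c in candidates])
    # Fewer leaves = simpler, so simplicity is a "lower is better" measure.
    simp = normalize([c["leaves"] for c in candidates], higher_is_better=False)
    stab = normalize([c["stability"] for c in candidates])
    scores = [
        weights["accuracy"] * a + weights["simplicity"] * s + weights["stability"] * t
        for a, s, t in zip(acc, simp, stab)
    ]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best]["name"], scores[best]

# Hypothetical candidate subtrees produced by the pruning phase.
candidates = [
    {"name": "T_full",  "accuracy": 0.93, "leaves": 40, "stability": 0.70},
    {"name": "T_mid",   "accuracy": 0.91, "leaves": 14, "stability": 0.85},
    {"name": "T_small", "accuracy": 0.86, "leaves": 5,  "stability": 0.95},
]
weights = {"accuracy": 0.5, "simplicity": 0.2, "stability": 0.3}
print(select_subtree(candidates, weights))
```

With these (assumed) weights the moderately pruned subtree wins: it trades a little accuracy for much better simplicity and stability, which a pure accuracy-maximizing pruner would ignore.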