The majority of existing algorithms for learning decision trees are greedy---a tree is induced top-down, making locally optimal decisions at each node. In most cases, however, the constructed tree is not globally optimal. Even the few non-greedy learners cannot learn good trees when the concept is difficult. Furthermore, they require a fixed amount of time and are not able to generate a better tree if additional time is available. We introduce a framework for anytime induction of decision trees that overcomes these problems by trading computation speed for better tree quality. Our proposed family of algorithms employs a novel strategy for evaluating candidate splits. A biased sampling of the space of consistent trees rooted at an attribute is used to estimate the size of the minimal tree under that attribute, and the attribute with the smallest expected tree is selected. We present two types of anytime induction algorithms: a contract algorithm that determines the sample size on the basis of a predetermined allocation of time, and an interruptible algorithm that starts with a greedy tree and continuously improves subtrees by additional sampling. Experimental results indicate that, for several hard concepts, our proposed approach exhibits good anytime behavior and yields significantly better decision trees when more time is available.
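The sampling-based split evaluation described above can be sketched in a few lines of Python. The sketch below is illustrative only, not the paper's actual algorithm: it assumes binary attributes, grows uniformly random (rather than biased) consistent trees, and uses hypothetical helper names (`tree_size`, `choose_split`). For each candidate attribute, it samples several consistent trees rooted at that attribute, estimates the minimal tree size from the samples, and selects the attribute with the smallest estimate; increasing `sample_size` is what trades extra time for better splits.

```python
import random

def tree_size(examples, attrs, rng):
    """Grow one random consistent tree over `examples` and return its leaf count.

    `examples` is a list of (feature_dict, label) pairs with binary features.
    """
    labels = {label for _, label in examples}
    if len(labels) <= 1 or not attrs:
        return 1  # pure node (or no attributes left): a single leaf
    a = rng.choice(list(attrs))  # random split, standing in for biased sampling
    rest = [x for x in attrs if x != a]
    size = 0
    for v in (0, 1):
        subset = [(f, y) for f, y in examples if f[a] == v]
        size += tree_size(subset, rest, rng) if subset else 1
    return size

def choose_split(examples, attrs, sample_size, rng):
    """Pick the attribute whose sampled consistent subtrees are smallest.

    For each candidate root attribute, sample `sample_size` random consistent
    trees per branch and use the minimum observed size as the estimate of the
    minimal tree under that attribute.
    """
    best_attr, best_est = None, float("inf")
    for a in attrs:
        rest = [x for x in attrs if x != a]
        est = 0
        for v in (0, 1):
            subset = [(f, y) for f, y in examples if f[a] == v]
            if subset:
                est += min(tree_size(subset, rest, rng)
                           for _ in range(sample_size))
            else:
                est += 1
        if est < best_est:
            best_attr, best_est = a, est
    return best_attr
```

On an XOR-like concept (label = a XOR b, with an irrelevant attribute c), a greedy gain-based criterion sees no single informative attribute, whereas the sampled tree-size estimate favors a or b over c, since the trees rooted at the irrelevant attribute come out larger.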