Finding a minimal decision tree consistent with the examples is an NP-complete problem. Therefore, most existing algorithms for decision tree induction use a greedy approach based on local heuristics. These algorithms typically require a fixed, small amount of time and produce trees that are not globally optimal. Recently, the LSID3 contract anytime algorithm was introduced to allow additional resources to be invested in building better decision trees. A contract anytime algorithm, however, needs its resource allocation a priori; in many cases the time allocation is not known in advance, precluding the use of contract algorithms. To overcome this problem, we present two interruptible anytime algorithms for inducing decision trees. Interruptible anytime algorithms do not require their resource allocation in advance and thus must be ready to be interrupted and return a valid solution at any moment. The first interruptible algorithm we propose is based on a general technique for converting a contract algorithm into an interruptible one by sequencing. The second is an iterative improvement algorithm that repeatedly selects the subtree whose reconstruction is estimated to yield the highest marginal utility and rebuilds it with a higher resource allocation. Empirical evaluation shows good anytime behavior for both algorithms. The iterative improvement algorithm exhibits smoother performance profiles, which allow more refined control.
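The contract-to-interruptible conversion by sequencing mentioned above can be sketched as follows. This is a minimal illustration of the general technique (running the contract algorithm with exponentially growing allocations and keeping the latest result), not the paper's actual implementation; the `contract_learner(data, budget)` interface and the doubling schedule are assumptions for the sketch.

```python
import time

def interruptible_by_sequencing(contract_learner, data, deadline):
    """Wrap a contract anytime learner so it can be interrupted.

    Repeatedly invokes `contract_learner` (a hypothetical interface:
    a callable taking the data and a resource budget, returning a model)
    with exponentially increasing budgets until the deadline passes,
    always keeping the most recent result. If interrupted, the latest
    completed model is a valid answer.
    """
    best = None
    budget = 1.0  # initial contract, in arbitrary resource units
    while time.monotonic() < deadline:
        best = contract_learner(data, budget)
        budget *= 2.0  # doubling bounds the overhead by a constant factor
    return best
```

The doubling schedule is the standard construction for this conversion: because each run is at most twice as long as the previous one, the total work spent is within a constant factor of the single contract run that would have used the elapsed time directly.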