Most existing algorithms for learning decision trees are greedy: a tree is induced top-down, making locally optimal decisions at each node. In most cases, however, the constructed tree is not globally optimal. Furthermore, greedy algorithms require a fixed amount of time and cannot produce a better tree when additional time is available. To overcome this problem, we present two lookahead-based algorithms for anytime induction of decision trees, allowing a tradeoff between tree quality and learning time. The first is depth-k lookahead, where a larger time allocation permits a larger k. The second algorithm uses a novel strategy for evaluating candidate splits: a stochastic version of ID3 is repeatedly invoked to estimate the size of the tree in which each split would result, and the split that minimizes the expected size is preferred. Experimental results indicate that for several hard concepts, our proposed approach exhibits good anytime behavior and yields significantly better decision trees when more time is available.
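The second strategy can be illustrated with a minimal sketch. The helper names (`gain`, `stochastic_id3_size`, `choose_split`) and the sampling scheme below are assumptions for illustration, not the paper's actual implementation: splits in the stochastic ID3 runs are drawn with probability proportional to their information gain, and each candidate root split is scored by the average size of the trees those runs produce.

```python
import math
import random
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(examples, attr):
    # Information gain of splitting `examples` (list of (features, label)) on attr.
    labels = [y for _, y in examples]
    total = entropy(labels)
    for v in set(x[attr] for x, _ in examples):
        subset = [y for x, y in examples if x[attr] == v]
        total -= len(subset) / len(examples) * entropy(subset)
    return total

def stochastic_id3_size(examples, attrs, rng):
    # Grow a tree whose splits are drawn with probability proportional to
    # information gain; return its size (number of internal nodes + leaves).
    labels = [y for _, y in examples]
    if len(set(labels)) <= 1 or not attrs:
        return 1
    gains = [gain(examples, a) for a in attrs]
    if sum(gains) == 0:
        attr = rng.choice(attrs)  # no informative split: pick uniformly
    else:
        attr = rng.choices(attrs, weights=gains)[0]
    rest = [a for a in attrs if a != attr]
    size = 1
    for v in set(x[attr] for x, _ in examples):
        subset = [(x, y) for x, y in examples if x[attr] == v]
        size += stochastic_id3_size(subset, rest, rng)
    return size

def choose_split(examples, attrs, samples=5, seed=0):
    # Prefer the split whose resulting subtrees have the smallest expected
    # size, estimated by `samples` stochastic ID3 runs per subset.
    rng = random.Random(seed)
    best, best_size = None, float("inf")
    for a in attrs:
        rest = [b for b in attrs if b != a]
        est = 0.0
        for v in set(x[a] for x, _ in examples):
            subset = [(x, y) for x, y in examples if x[a] == v]
            est += sum(stochastic_id3_size(subset, rest, rng)
                       for _ in range(samples)) / samples
        if est < best_size:
            best, best_size = a, est
    return best
```

On a hard concept such as 3-bit XOR (label = x0 XOR x1, with x2 irrelevant), every attribute has zero gain at the root, so a purely greedy criterion cannot distinguish the relevant attributes; the size-based estimate, by contrast, prefers x0 or x1 because splitting on them leads to much smaller subtrees.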