While the decision tree is an effective representation that has been used in many domains, a tree can encode a concept inefficiently: when a subconcept appears in several parts of the tree, the tree must represent that subconcept multiple times. In this paper we introduce a new representation based on trees, the linked decision forest, that does not need to repeat internal structure. We also introduce a supervised learning algorithm, Lumberjack, that uses the new representation. We then show empirically that Lumberjack improves generalization accuracy on hierarchically decomposable concepts.
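The duplication problem the abstract describes can be sketched in a few lines of Python. The node structure, feature indices, and example concept below are illustrative assumptions, not taken from the paper or its Lumberjack algorithm: a plain tree must build the subconcept `x1 XOR x2` once under each branch of `x0`, while a linked (DAG-like) structure can point both branches at a single shared copy.

```python
# Illustrative sketch only (not the paper's Lumberjack algorithm):
# a decision tree that duplicates a subconcept versus a linked
# structure that shares one copy of it.

class Node:
    """A binary decision node testing one Boolean feature, or a leaf."""
    def __init__(self, feature=None, true_branch=None, false_branch=None, label=None):
        self.feature = feature          # index of feature tested (None at a leaf)
        self.true_branch = true_branch
        self.false_branch = false_branch
        self.label = label              # class label at a leaf, else None

def classify(node, x):
    """Follow feature tests until a leaf is reached."""
    while node.label is None:
        node = node.true_branch if x[node.feature] else node.false_branch
    return node.label

def count_nodes(node, seen=None):
    """Count distinct node objects; a shared node is counted once."""
    if seen is None:
        seen = set()
    if node is None or id(node) in seen:
        return 0
    seen.add(id(node))
    return (1 + count_nodes(node.true_branch, seen)
              + count_nodes(node.false_branch, seen))

def make_sub():
    """Build a fresh subtree computing the subconcept x1 XOR x2."""
    leaf1, leaf0 = Node(label=1), Node(label=0)
    return Node(1, Node(2, leaf0, leaf1), Node(2, leaf1, leaf0))

# Plain tree: the XOR subconcept is represented twice, once per branch of x0.
tree = Node(0, make_sub(), make_sub())

# Linked version: both branches of x0 reference ONE copy of the subconcept.
shared_sub = make_sub()
dag = Node(0, shared_sub, shared_sub)

# Both structures compute the same concept, but the linked one is smaller.
for x in [(0, 0, 1), (1, 0, 0), (1, 1, 1)]:
    assert classify(tree, x) == classify(dag, x)
print(count_nodes(tree), count_nodes(dag))
```

Here sharing saves only a handful of nodes, but the gap grows with the size of the repeated subconcept and the number of places it recurs, which is the inefficiency a linked decision forest is meant to avoid.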