Most decision tree algorithms base their splitting decisions on a piecewise constant model, and these splitting criteria are often carried over unchanged to trees with non-constant models at the leaf nodes. Look-ahead Linear Regression Trees (LLRT) are motivated by the observation that, among the methods proposed to date, there has been no scalable approach that exhaustively evaluates all possible leaf-node models in order to obtain an optimal split. Using several optimizations, LLRT is able to generate and evaluate thousands of linear regression models per second. This allows for a near-exhaustive evaluation of all possible splits in a node, based on the quality of fit of the linear regression models in the resulting branches. We decompose the calculation of the residual sum of squares (RSS) in such a way that a large part of it is precomputed. The resulting method is highly scalable. We observe that it obtains high predictive accuracy on problems with strong mutual dependencies between attributes. We report on experiments with two simulated and seven real data sets.
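The central idea — scoring each candidate split by the RSS of a linear model fitted in each branch, with most of the work precomputed — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact LLRT implementation: it uses prefix sums of the sufficient statistics X^T X, X^T y, and y^T y over the samples sorted by the split feature, so each candidate split costs only one small linear solve per branch rather than a full refit.

```python
import numpy as np

def best_split_rss(X, y, feature):
    """Evaluate every candidate split on `feature` by the total RSS of
    linear models fitted in the left and right branches.

    Illustrative sketch of the precomputation idea: prefix sums of the
    sufficient statistics make each branch fit a cheap linear solve,
    since RSS = y^T y - beta^T (X^T y) once beta solves the normal
    equations (X^T X) beta = X^T y.
    """
    order = np.argsort(X[:, feature])
    Xs, ys = X[order], y[order]
    n, d = Xs.shape
    Xa = np.hstack([Xs, np.ones((n, 1))])  # add intercept column

    # Precomputed part: prefix sums of the sufficient statistics.
    xtx = np.cumsum(Xa[:, :, None] * Xa[:, None, :], axis=0)  # (n, d+1, d+1)
    xty = np.cumsum(Xa * ys[:, None], axis=0)                 # (n, d+1)
    yty = np.cumsum(ys * ys)                                  # (n,)

    def branch_rss(A, b, s):
        # Tiny ridge term keeps the solve stable for near-singular branches.
        beta = np.linalg.solve(A + 1e-8 * np.eye(A.shape[0]), b)
        return s - beta @ b

    best_rss, best_threshold = np.inf, None
    # Leave at least d+1 points in each branch so the fits are determined.
    for i in range(d + 1, n - d - 1):
        left = branch_rss(xtx[i - 1], xty[i - 1], yty[i - 1])
        right = branch_rss(xtx[-1] - xtx[i - 1],
                           xty[-1] - xty[i - 1],
                           yty[-1] - yty[i - 1])
        if left + right < best_rss:
            best_rss, best_threshold = left + right, Xs[i, feature]
    return best_rss, best_threshold
```

On data that is exactly piecewise linear in the split feature, the minimum-RSS threshold recovers the true breakpoint, since both branch models then fit with near-zero residual.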