Better subset regression using the nonnegative garrote. Technometrics.
Prediction games and arcing algorithms. Neural Computation.
Sparse Regression Ensembles in Infinite and Finite Hypothesis Spaces. Machine Learning.
Knot selection by boosting techniques. Computational Statistics & Data Analysis.
Boosting nonlinear additive autoregressive time series. Computational Statistics & Data Analysis.
Grouped graphical Granger modeling methods for temporal causal modeling. Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Iterative bias reduction: a comparative study. Statistics and Computing.
We propose Sparse Boosting (the SparseL2Boost algorithm), a variant of boosting with the squared error loss. SparseL2Boost yields sparser solutions than the previously proposed L2Boosting by minimizing some penalized L2-loss functions, the FPE model selection criteria, through small-step gradient descent. Although boosting may already give relatively sparse solutions, corresponding for example to the soft-thresholding estimator in orthogonal linear models, more sparseness is sometimes desirable for increased prediction accuracy and better variable selection: such goals can be achieved with SparseL2Boost. We prove an equivalence of SparseL2Boost to Breiman's nonnegative garrote estimator for orthogonal linear models and demonstrate the generic nature of SparseL2Boost for nonparametric interaction modeling. For automatic selection of the tuning parameter in SparseL2Boost, we propose the gMDL model selection criterion, which can also be used for early stopping of L2Boosting. Consequently, one can select between SparseL2Boost and L2Boosting by comparing their gMDL scores.
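To make the garrote equivalence concrete: for an orthonormal design with ordinary least squares coefficients \hat{\beta}_j and a penalty parameter \lambda, the soft-thresholding estimator (which L2Boosting corresponds to in this setting) is sign(\hat{\beta}_j)(|\hat{\beta}_j| - \lambda)_+, whereas Breiman's nonnegative garrote shrinks multiplicatively via (1 - \lambda/\hat{\beta}_j^2)_+ \hat{\beta}_j, so large coefficients are shrunken proportionally less.

The sketch below illustrates the componentwise boosting idea underlying L2Boosting and its sparser variant; it is not the authors' implementation. The function name l2boost_componentwise, the step size nu, and the use of the number of distinct selected predictors as a degrees-of-freedom proxy in an FPE-style score are assumptions made here for illustration only; SparseL2Boost itself measures model complexity via the trace of the boosting operator.

import numpy as np

def l2boost_componentwise(X, y, nu=0.1, max_steps=500):
    """Componentwise linear L2Boosting: a minimal illustrative sketch.

    Assumes the columns of X are centered. At each step, the single
    predictor giving the largest reduction in residual sum of squares
    is fitted to the current residuals, and its coefficient is moved a
    small step `nu` toward that least-squares fit. Model choice along
    the boosting path uses a crude FPE-style penalty with the number
    of distinct selected predictors as a degrees-of-freedom proxy
    (the paper instead penalizes via the trace of the boosting operator).
    """
    n, p = X.shape
    col_ss = np.einsum("ij,ij->j", X, X)  # per-column sums of squares
    intercept = y.mean()
    beta = np.zeros(p)
    resid = y - intercept
    selected = set()
    best_crit, best_beta = np.inf, beta.copy()

    for _ in range(max_steps):
        coefs = (X.T @ resid) / col_ss      # componentwise least-squares fits
        rss_drop = coefs ** 2 * col_ss      # RSS reduction for each component
        j = int(np.argmax(rss_drop))        # best-reducing component
        beta[j] += nu * coefs[j]
        resid = resid - nu * coefs[j] * X[:, j]
        selected.add(j)

        df = len(selected)
        if df >= n:
            break
        crit = (resid @ resid) / n * (n + df) / (n - df)  # FPE-style score
        if crit < best_crit:
            best_crit, best_beta = crit, beta.copy()

    return intercept, best_beta

A quick usage example on synthetic data with two active predictors:

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X -= X.mean(axis=0)                  # center columns, as the sketch assumes
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.standard_normal(200)
icpt, beta_hat = l2boost_componentwise(X, y)

With the small step size, coefficients grow gradually and only repeatedly selected components end up far from zero, which is the source of the (approximate) sparsity the abstract refers to.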