Five robustifications of L₂ boosting for linear regression, with various robustness properties, are considered. The first two use the Huber loss as the loss function driving the boosting, and the next two use robust simple linear regression for the componentwise fitting in L₂ boosting (i.e. robust base learners). Both concepts can be applied with or without down-weighting of leverage points. The last method uses robust correlation estimates and appears to be the most robust. A crucial advantage of all methods is that they neither compute covariance matrices of all covariates nor need to identify multivariate leverage points. When there are no outliers, the robust methods perform only slightly worse than L₂ boosting. In the contaminated case, however, the robust methods outperform L₂ boosting by a large margin. Some of the robustifications are also computationally highly efficient and therefore well suited to truly high-dimensional problems.
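To make the Huber-loss idea concrete, below is a minimal sketch of componentwise boosting with simple linear base learners fitted to Huber-loss pseudo-residuals. It is an illustration under stated assumptions, not the paper's implementation: the function names, the step size `nu`, the Huber threshold `delta`, and the median centering are illustrative choices.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Negative gradient of the Huber loss with respect to the fit:
    the residual itself for |r| <= delta, clipped to +/- delta otherwise."""
    return np.clip(r, -delta, delta)

def huber_componentwise_boosting(X, y, n_iter=100, nu=0.1, delta=1.0):
    """Componentwise boosting of simple linear base learners driven by
    Huber-loss pseudo-residuals (a sketch, not the paper's code)."""
    n, p = X.shape
    Xc = X - np.median(X, axis=0)      # robust centering of the covariates
    intercept = np.median(y)           # robust initial fit
    f = np.full(n, intercept)
    coef = np.zeros(p)
    for _ in range(n_iter):
        u = huber_grad(y - f, delta)   # bounded pseudo-residuals
        best_j, best_b, best_gain = 0, 0.0, -np.inf
        for j in range(p):             # one simple regression per covariate
            x = Xc[:, j]
            sxx = x @ x
            if sxx == 0.0:
                continue
            b = (x @ u) / sxx          # least-squares slope of u on x_j
            gain = b * b * sxx         # reduction in residual sum of squares
            if gain > best_gain:
                best_j, best_b, best_gain = j, b, gain
        coef[best_j] += nu * best_b    # shrunken coordinate update
        f += nu * best_b * Xc[:, best_j]
    return intercept, coef
```

Clipping the pseudo-residuals bounds the influence of outlying responses; down-weighting leverage points, as mentioned in the abstract, would additionally require weighting extreme covariate values and is omitted here. Note also that each boosting step needs only p simple regressions, which is why no covariance matrix of all covariates has to be computed.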