Generalized additive models (GAMs) are a generalization of generalized linear models (GLMs) and constitute a powerful technique that has proven its ability to capture nonlinear relationships between explanatory variables and a response variable in many domains. In this paper, GAMs are proposed as base classifiers for ensemble learning. Three ensemble strategies for binary classification with GAM base classifiers are proposed: (i) GAMbag, based on Bagging; (ii) GAMrsm, based on the Random Subspace Method (RSM); and (iii) GAMens, a combination of both. In an experimental validation on 12 data sets from the UCI repository, the proposed algorithms are benchmarked against a single GAM and against decision-tree-based ensemble classifiers (i.e. RSM, Bagging, Random Forest, and the recently proposed Rotation Forest). A number of conclusions can be drawn from the results. First, an ensemble of GAMs always yields better prediction performance than a single GAM. Second, GAMrsm and GAMens perform comparably, and both outperform GAMbag. Finally, the results demonstrate the value of using GAMs rather than standard decision trees as base classifiers: GAMbag performs comparably to ordinary Bagging, while GAMrsm and GAMens outperform RSM and Bagging and perform comparably to Random Forest and Rotation Forest. Sensitivity analyses are included for the number of member classifiers in the ensemble, the number of variables in a random feature subspace, and the number of degrees of freedom for GAM spline estimation.
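The GAMens strategy described above combines two sources of ensemble diversity: each member classifier is trained on a bootstrap sample of the training data (Bagging) restricted to a randomly drawn feature subspace (RSM), and predictions are combined by majority vote. The following is a minimal sketch of that combination, not the authors' implementation: the class name `SubspaceBaggingEnsemble` is hypothetical, and a simple nearest-centroid rule stands in for the penalized-likelihood spline GAM that the paper actually fits as base learner.

```python
import numpy as np


class SubspaceBaggingEnsemble:
    """Sketch of the GAMens idea: each member is fit on a bootstrap
    sample (Bagging) drawn over a random feature subspace (RSM), and
    members vote on the final label. A nearest-centroid classifier is
    a stand-in for the GAM base learner used in the paper."""

    def __init__(self, n_members=10, subspace_size=2, seed=0):
        self.n_members = n_members
        self.subspace_size = subspace_size
        self.rng = np.random.default_rng(seed)
        self.members = []  # list of (feature_indices, centroid_0, centroid_1)

    def fit(self, X, y):
        n, p = X.shape
        for _ in range(self.n_members):
            # Redraw the bootstrap sample if it misses one of the classes.
            while True:
                rows = self.rng.integers(0, n, size=n)   # bootstrap sample
                yb = y[rows]
                if (yb == 0).any() and (yb == 1).any():
                    break
            cols = self.rng.choice(p, size=self.subspace_size, replace=False)
            Xb = X[rows][:, cols]
            c0 = Xb[yb == 0].mean(axis=0)                # class-0 centroid
            c1 = Xb[yb == 1].mean(axis=0)                # class-1 centroid
            self.members.append((cols, c0, c1))
        return self

    def predict(self, X):
        votes = np.zeros(len(X))
        for cols, c0, c1 in self.members:
            Xs = X[:, cols]
            d0 = np.linalg.norm(Xs - c0, axis=1)
            d1 = np.linalg.norm(Xs - c1, axis=1)
            votes += (d1 < d0).astype(float)             # member votes class 1
        return (votes > self.n_members / 2).astype(int)  # majority vote
```

Setting `subspace_size = p` with bootstrapping recovers plain Bagging (GAMbag), while drawing each member on the full training set with random subspaces recovers the RSM variant (GAMrsm); GAMens applies both at once, as above.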