A Comparison of Decision Tree Ensemble Creation Techniques
IEEE Transactions on Pattern Analysis and Machine Intelligence
We experimentally evaluate randomization-based approaches to creating an ensemble of decision-tree classifiers. Unlike methods related to boosting, all of the eight approaches considered here create each classifier in an ensemble independently of the other classifiers. Experiments were performed on 28 publicly available datasets, using C4.5 release 8 as the base classifier. While each of the other seven approaches has some strengths, we find that none of them is consistently more accurate than standard bagging when tested for statistical significance.
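Bagging, the baseline the abstract refers to, trains each tree on an independent bootstrap sample of the training data and combines the trees by unweighted majority vote; this is what makes each classifier independent of the others, in contrast to boosting. Below is a minimal Python sketch of that idea. It is an illustration under stated assumptions, not the paper's setup: scikit-learn's CART-style DecisionTreeClassifier stands in for C4.5 release 8, and a stock scikit-learn dataset stands in for the 28 datasets evaluated in the paper.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset, not from the paper
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_trees = 25
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw |training set| examples with replacement.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    # CART-style learner as a stand-in for C4.5 release 8.
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_train[idx], y_train[idx])  # each tree is trained independently
    trees.append(tree)

# Combine the independently trained trees by unweighted majority vote.
votes = np.stack([t.predict(X_test) for t in trees])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("bagged accuracy:", (majority == y_test).mean())

The other randomization-based methods compared in the paper (e.g., random subspaces) differ mainly in how each tree's training view is perturbed, while keeping the same independent-training, majority-vote structure shown here.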