Bagging, boosting, rotation forest, and random subspace methods are well-known resampling ensemble methods that generate and combine diverse learners using the same learning algorithm for the base classifiers. Boosting and rotation forest are considered stronger than bagging and the random subspace method on noise-free data, but there are strong empirical indications that bagging and the random subspace method are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we build an ensemble of four member ensembles, one each of bagging, boosting, rotation forest, and random subspace, with six sub-classifiers in each, and use a voting methodology for the final prediction. We compared this technique with plain bagging, boosting, rotation forest, and random subspace ensembles of 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets; the proposed technique achieved better accuracy in most cases.
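To make the construction concrete, below is a minimal sketch of the "ensemble of ensembles" idea in Python, assuming scikit-learn (>= 1.2). Scikit-learn has no rotation forest implementation, so the sketch combines only the other three member ensembles (a third-party rotation forest estimator could be added as a fourth member); the dataset and parameter choices are illustrative, not those used in the paper.

# Sketch of the voting ensemble of member ensembles (scikit-learn >= 1.2).
# Rotation forest is omitted: scikit-learn provides no implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

base = DecisionTreeClassifier(random_state=0)

# Each member is itself an ensemble of six sub-classifiers, as in the paper.
bagging = BaggingClassifier(estimator=base, n_estimators=6, random_state=0)
boosting = AdaBoostClassifier(estimator=base, n_estimators=6, random_state=0)
# Random subspace method: each learner sees a random subset of features,
# with no bootstrap sampling of the instances.
subspace = BaggingClassifier(estimator=base, n_estimators=6,
                             bootstrap=False, max_features=0.5,
                             random_state=0)

# Majority voting over the member ensembles gives the final prediction.
combined = VotingClassifier(
    estimators=[("bag", bagging), ("boost", boosting), ("rsm", subspace)],
    voting="hard",
)

X, y = load_breast_cancer(return_X_y=True)  # illustrative benchmark dataset
print(cross_val_score(combined, X, y, cv=10).mean())

The design choice the abstract argues for is visible here: the noise-robust members (bagging, random subspace) and the noise-sensitive but strong members (boosting, and rotation forest when available) each contribute one vote, so no single re-sampling strategy dominates the final prediction.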