This article investigates the properties of class-switching ensembles composed of neural networks and compares them with class-switching ensembles of decision trees and with standard ensemble learning methods such as bagging and boosting. In a class-switching ensemble, each learner is trained on a modified version of the training data, obtained by switching the class labels of a fraction of training examples selected at random from the original training set. Experiments on 20 benchmark classification problems, including real-world and synthetic data, show that class-switching ensembles of neural networks can achieve significant improvements in generalization accuracy over single neural networks and over bagging and boosting ensembles. Furthermore, it is possible to build medium-sized ensembles (~200 networks) whose classification performance is comparable to that of larger class-switching ensembles (~1000 learners) of unpruned decision trees.
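The class-switching procedure described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `MLPClassifier` as the base neural network, and names such as `switch_labels` and the parameter choices (fraction `p`, network size, iteration budget) are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def switch_labels(y, p, rng):
    """Flip the labels of a fraction p of examples chosen at random,
    reassigning each one a different class picked uniformly."""
    y = y.copy()
    classes = np.unique(y)
    idx = rng.choice(len(y), size=int(p * len(y)), replace=False)
    for i in idx:
        y[i] = rng.choice(classes[classes != y[i]])
    return y

def class_switching_ensemble(X, y, n_learners=11, p=0.2, seed=0):
    """Train each network on an independently label-switched copy of the data."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_learners):
        y_switched = switch_labels(y, p, rng)
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                            random_state=int(rng.integers(1 << 31)))
        ensemble.append(net.fit(X, y_switched))
    return ensemble

def ensemble_predict(ensemble, X):
    """Combine members by unweighted majority vote."""
    votes = np.stack([m.predict(X) for m in ensemble]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

Because each member sees different corrupted labels, individual networks are deliberately imperfect; the majority vote is what recovers (and, per the article's experiments, can improve) generalization accuracy.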