This work presents a new evolutionary ensemble method for data classification, inspired by the concepts of bagging and boosting, that aims to combine their strengths while avoiding their weaknesses. The approach is based on a distributed multiple-population genetic programming (GP) algorithm that exploits coevolution at two levels. At the inter-population level, the populations cooperate in a semi-isolated fashion, whereas at the intra-population level the candidate classifiers coevolve competitively with the training data samples. The final classifier is a voting committee composed of the best members of all the populations. Experiments performed with varying numbers of populations show that the approach outperforms both bagging and boosting on a number of benchmark problems.
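The abstract does not give pseudocode, but the final combination step it describes (a voting committee of the best individual from each population) can be sketched as a plain majority vote. The classifiers below are hypothetical stand-ins for evolved GP expressions, not the paper's actual individuals:

```python
from collections import Counter

def committee_predict(classifiers, x):
    """Majority vote among the best classifier taken from each population."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Hypothetical "best-of-population" classifiers: simple threshold rules
# standing in for evolved GP trees over a 2-feature input.
best_of_pop = [
    lambda x: 1 if x[0] > 0.5 else 0,
    lambda x: 1 if x[1] > 0.5 else 0,
    lambda x: 1 if x[0] + x[1] > 1.0 else 0,
]

print(committee_predict(best_of_pop, (0.9, 0.2)))  # two of three vote 1
```

In the paper's setting each entry of the committee would be the fittest member of one semi-isolated GP population; the vote itself is independent of how those members were evolved.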