Coevolutionary bid-based genetic programming for problem decomposition in classification
Genetic Programming and Evolvable Machines
Mining distributed evolving data streams using fractal GP ensembles
EuroGP'07 Proceedings of the 10th European conference on Genetic programming
An ensemble-based evolutionary framework for coping with distributed intrusion detection
Genetic Programming and Evolvable Machines
Coevolutionary multi-population genetic programming for data classification
Proceedings of the 12th annual conference on Genetic and evolutionary computation
A Bayesian approach for combining ensembles of GP classifiers
MCS'11 Proceedings of the 10th international conference on Multiple classifier systems
Handling different categories of concept drifts in data streams using distributed GP
EuroGP'10 Proceedings of the 13th European conference on Genetic Programming
Ensemble image classification method based on genetic image network
EuroGP'10 Proceedings of the 13th European conference on Genetic Programming
Detecting RNA sequences using two-stage SVM classifier
LSMS'07 Proceedings of the 2007 international conference on Life System Modeling and Simulation
Pruning GP-based classifier ensembles by Bayesian networks
PPSN'12 Proceedings of the 12th international conference on Parallel Problem Solving from Nature - Volume Part I
Using Bayesian networks for selecting classifiers in GP ensembles
Information Sciences: an International Journal
Fast classification for large data sets via random selection clustering and Support Vector Machines
Intelligent Data Analysis
An extension of cellular genetic programming for data classification (CGPC) to induce an ensemble of predictors is presented. Two algorithms implementing the bagging and boosting techniques are described and compared with CGPC. The approach can deal with large data sets that do not fit in main memory, since each classifier is trained on a subset of the overall training data. The predictors are then combined to classify new tuples. Experiments on several data sets show that, even though each classifier uses a training set of reduced size, better classification accuracy can be obtained, and at a much lower computational cost.
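The ensemble scheme described above can be sketched in a few lines: each member is trained on a bootstrap subset of the data, and predictions on new tuples are combined by majority vote, as in bagging. This is a minimal illustrative sketch only; the function names are hypothetical, and a simple threshold rule stands in for the GP classifier that CGPC would actually evolve.

```python
import random
from collections import Counter

def train_stub_classifier(sample):
    # Stand-in for evolving one GP classifier on a data subset:
    # here we just learn a threshold on a single feature.
    # (Illustrative only; a real CGPC run evolves a program tree.)
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    if not pos or not neg:          # degenerate bootstrap sample
        thr = 0.5
    else:
        thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= thr else 0

def bagging_ensemble(data, n_classifiers=5, subset_size=20, seed=0):
    """Train each member on a bootstrap subset of the training data,
    so no single classifier needs the full data set in memory."""
    rng = random.Random(seed)
    return [
        train_stub_classifier([rng.choice(data) for _ in range(subset_size)])
        for _ in range(n_classifiers)
    ]

def predict(ensemble, x):
    """Combine the members' votes by simple majority."""
    votes = Counter(clf(x) for clf in ensemble)
    return votes.most_common(1)[0][0]

# Toy data: one feature in [0, 1], class 1 when it exceeds 0.5.
data = [(x / 100, 1 if x > 50 else 0) for x in range(100)]
ensemble = bagging_ensemble(data)
print(predict(ensemble, 0.9))  # expect class 1
print(predict(ensemble, 0.1))  # expect class 0
```

The boosting variant mentioned in the abstract differs only in how the subsets are drawn (reweighting toward misclassified tuples) and in weighting the votes; the combination step keeps the same shape.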