Stacking is a widely used technique for combining classifiers to improve prediction accuracy. Early research on Stacking showed that selecting the right base classifiers, their parameters, and the meta-classifier is a critical issue, and most work on the topic hand-picks that combination. Instead of starting from such strong initial assumptions, our approach uses genetic algorithms to search for good Stacking configurations. Since this search can lead to overfitting, one goal of this paper is to empirically evaluate the overall efficiency of the approach; a second goal is to compare it with the current best Stacking-building techniques. The results show that our approach finds Stacking configurations that, in the worst case, perform as well as those produced by the best techniques, with the advantage of not having to manually set up the structure of the Stacking system.
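The search described above can be sketched as a standard genetic algorithm over binary chromosomes, where each bit toggles one candidate base classifier in the Stacking ensemble (a chromosome could likewise encode parameters and the meta-classifier choice). The classifier pool and the fitness function below are illustrative assumptions, not the paper's actual setup: in the real system, fitness would be the cross-validated accuracy of the Stacking ensemble a chromosome encodes, while here a fixed "good" configuration stands in so the sketch is self-contained and runnable.

```python
import random

# Hypothetical pool of candidate base classifiers (assumption for this sketch);
# bit i of a chromosome says whether classifier i joins the Stacking ensemble.
CLASSIFIER_POOL = ["naive_bayes", "decision_tree", "knn", "svm", "mlp"]


def fitness(chromosome):
    """Stand-in fitness: similarity to an arbitrary 'good' configuration.

    In the real system this would be the cross-validated accuracy of the
    Stacking ensemble that the chromosome encodes.
    """
    target = [1, 0, 1, 1, 0]  # arbitrary optimum, for demonstration only
    return sum(1 for a, b in zip(chromosome, target) if a == b)


def tournament(pop, k=3):
    """Pick the fittest of k randomly sampled individuals."""
    return max(random.sample(pop, k), key=fitness)


def crossover(a, b):
    """One-point crossover between two parent chromosomes."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]


def mutate(chrom, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]


def ga_search(pop_size=20, generations=30):
    """Evolve Stacking configurations; return the best subset and its score."""
    pop = [[random.randint(0, 1) for _ in CLASSIFIER_POOL]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(pop_size - 1)]
        pop = children + [best]  # elitism: the best individual always survives
        best = max(pop, key=fitness)
    subset = [name for name, bit in zip(CLASSIFIER_POOL, best) if bit]
    return subset, fitness(best)
```

With elitism, the best fitness found never decreases across generations, which mirrors the idea that the search keeps the best Stacking configuration seen so far rather than relying on any hand-picked starting point.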