Ensemble learning refers to methods that combine multiple models to improve predictive performance. Ensemble methods such as stacking have been studied intensively and can yield modest performance gains. However, there is no guarantee that a stacking algorithm outperforms all of its base classifiers. In this paper, we propose a new stacking algorithm in which the meta-learner first collects the predictive scores that the base classifiers assign to each possible class label, and then reranks all possible class labels according to these scores. The algorithm finds the best linear combination of the base classifiers on the training samples, which ensures that it outperforms every individual base classifier on the training set. Experiments on several public datasets show that the proposed algorithm outperforms the baseline algorithms and several state-of-the-art stacking algorithms.
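The following is a minimal sketch of score-based stacking in the spirit of the description above, not the authors' implementation. It assumes scikit-learn estimators, arbitrary example base classifiers, and logistic regression as the linear meta-learner; the per-class scores are gathered out-of-fold so that the meta-features do not leak training labels.

```python
# Sketch: stacking on per-class predictive scores with a linear meta-learner.
# Base classifiers and the meta-learner choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
base_learners = [GaussianNB(),
                 DecisionTreeClassifier(random_state=0),
                 RandomForestClassifier(random_state=0)]

# Level 0: collect each base classifier's predictive score for every
# possible class label, using out-of-fold predictions.
meta_features = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for clf in base_learners
])

# Level 1: a linear meta-learner reranks the class labels; its learned
# weights act as a linear combination of the base classifiers' scores.
meta_learner = LogisticRegression(max_iter=1000)
meta_learner.fit(meta_features, y)

# Refit the base learners on all training data for scoring new samples.
for clf in base_learners:
    clf.fit(X, y)

def predict(x_new):
    scores = np.hstack([clf.predict_proba(x_new) for clf in base_learners])
    return meta_learner.predict(scores)

print(predict(X[:3]))
```

In this sketch the meta-learner sees one score per (base classifier, class label) pair, so the predicted label is the one whose combined score ranks highest, which is one plausible reading of the reranking step described above.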