Reduced Ensemble Size Stacking
ICTAI '04: Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence
In this paper we investigate an algorithmic extension to Stacking for regression that prunes the ensemble before the meta-learner is applied, based on the training accuracy and diversity of the ensemble members. We evaluate two variants of this approach against the standard Stacking algorithm: a static approach, which prunes every ensemble back to the same constant size, and a variable approach, which prunes each ensemble to an appropriate size based on measures of the accuracy and diversity of its members. We show that on average both techniques remain competitive in performance with their non-pruned counterpart, while having the advantage of producing smaller and less complex ensembles. In the latter respect the static approach proved more effective, but we show that the variable approach lends itself better to further optimization.
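To make the procedure concrete, below is a minimal sketch of accuracy- and diversity-based pruning for a Stacking regression ensemble, assuming scikit-learn-style estimators. The helper names (member_predictions, score_members, prune) and the specific measures used here (out-of-fold mean squared error for accuracy, mean pairwise error correlation for diversity, a mean-score cutoff for the variable variant) are illustrative assumptions, not the paper's own definitions.

    # Sketch: prune a Stacking regression ensemble by training
    # accuracy and diversity, then fit the meta-learner on the
    # retained members only. Measures and names are assumptions.
    import numpy as np
    from sklearn.datasets import make_friedman1
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    def member_predictions(members, X, y, cv=5):
        # Out-of-fold predictions, one column per ensemble member.
        return np.column_stack(
            [cross_val_predict(m, X, y, cv=cv) for m in members])

    def score_members(preds, y):
        # Lower is better: training error penalised by redundancy.
        errors = preds - y[:, None]
        mse = (errors ** 2).mean(axis=0)          # accuracy term
        corr = np.corrcoef(errors.T)              # pairwise error correlations
        # Mean absolute correlation with the other members
        # (self-correlation of 1.0 excluded).
        redundancy = (np.abs(corr).sum(axis=1) - 1) / (corr.shape[0] - 1)
        return mse * (1.0 + redundancy)           # diversity-weighted error

    def prune(members, scores, k=None, threshold=None):
        # Static variant: keep the k best-scoring members.
        # Variable variant: keep every member whose score beats a
        # cutoff (here, the mean score, as an illustrative default).
        order = np.argsort(scores)
        if k is not None:
            keep = list(order[:k])
        else:
            cutoff = threshold if threshold is not None else scores.mean()
            keep = [i for i in order if scores[i] <= cutoff]
        return [members[i] for i in keep], keep

    # Usage on a synthetic regression task.
    X, y = make_friedman1(n_samples=300, noise=1.0, random_state=0)
    members = (
        [KNeighborsRegressor(n_neighbors=n) for n in (1, 3, 5, 9)]
        + [DecisionTreeRegressor(max_depth=d, random_state=0) for d in (2, 4, 8)]
        + [Ridge(alpha=a) for a in (0.1, 1.0, 10.0)]
    )
    preds = member_predictions(members, X, y)
    scores = score_members(preds, y)
    kept, idx = prune(members, scores, k=5)       # static; omit k for variable

    # Standard Stacking step, restricted to the retained members.
    meta = LinearRegression().fit(preds[:, idx], y)
    for m in kept:                                # refit members on all data
        m.fit(X, y)

At prediction time the meta-learner is applied to the retained members' outputs exactly as in standard Stacking; the only difference the pruning makes is that fewer base-level predictions need to be computed and combined.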