In this paper, we evaluate a new ensemble scheme for regression in which the ensemble comprises a number of models, each built on feature-sampled data using a learning algorithm drawn from a pool of simple and stable learning algorithms, with Stacking as the ensemble integration method. We compare this scheme, referred to as non-strict heterogeneous Stacking, against a number of baseline methods and against strict heterogeneous Stacking, which uses exactly one model per base learning algorithm, each built on unsampled data. We demonstrate that, for the set of base learning algorithms evaluated, non-strict Stacking strongly outperforms the baseline methods. Moreover, the added flexibility of non-strict Stacking allows it to outperform both strict Stacking and homogeneous Stacking for the same set of base learning algorithms. Finally, we discuss the general conditions under which non-strict heterogeneous Stacking is likely to be advantageous.
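To make the scheme concrete, the following is a minimal sketch of non-strict heterogeneous Stacking using scikit-learn. The learner pool (linear regression, k-nearest neighbours, a shallow decision tree), the feature-sampling fraction, the number of base models, and the Ridge meta-learner are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of non-strict heterogeneous Stacking: each base model is trained on a
# random feature subset by a learner drawn from a pool of simple algorithms,
# and Stacking integrates the base-model predictions via a meta-learner.
# The pool, subset fraction, and meta-learner below are assumed, not the
# paper's exact setup.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor


class FeatureSubset(BaseEstimator, TransformerMixin):
    """Project the input onto a fixed subset of its feature columns."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.asarray(X)[:, self.columns]


def non_strict_stacking(n_features, n_models=9, subset_frac=0.5, seed=0):
    """Build n_models base models on random feature samples, each fitted by a
    learner drawn from a pool of simple algorithms, combined by Stacking."""
    rng = np.random.default_rng(seed)
    pool = [LinearRegression,
            KNeighborsRegressor,
            lambda: DecisionTreeRegressor(max_depth=3)]  # illustrative pool
    k = max(1, int(subset_frac * n_features))
    estimators = []
    for i in range(n_models):
        cols = rng.choice(n_features, size=k, replace=False)  # feature sampling
        learner = pool[i % len(pool)]()  # draw a learner from the pool
        estimators.append((f"m{i}", make_pipeline(FeatureSubset(cols), learner)))
    # The meta-learner (here a Ridge regressor) is the Stacking integrator.
    return StackingRegressor(estimators=estimators, final_estimator=Ridge())


if __name__ == "__main__":
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, n_features=20, noise=10.0,
                           random_state=0)
    model = non_strict_stacking(n_features=X.shape[1])
    print(cross_val_score(model, X, y, cv=5).mean())
```

Note that because the scheme is non-strict, n_models is decoupled from the pool size: the same simple learner can appear several times, each instance seeing a different feature sample, which is the source of the added flexibility over strict heterogeneous Stacking.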