Lazy meta-learning: creating customized model ensembles on demand
WCCI'12 Proceedings of the 2012 World Congress conference on Advances in Computational Intelligence
Fusing the outputs of an ensemble of diverse predictive models usually boosts overall prediction accuracy. Such fusion is guided by each model's local performance, i.e., its prediction accuracy in the neighborhood of the probe point; for each probe we therefore instantiate a customized fusion mechanism. The fusion mechanism is a meta-model, i.e., a model that operates one level above the object-level models whose predictions we want to fuse. Like those models, the meta-model is defined by structural and parametric information. In this paper, we focus on defining the parametric information for a given structure. For each probe point, we either retrieve or compute the parameters that instantiate the associated meta-model. The retrieval approach relies on a pre-compiled, CART-derived segmentation of the probe's state space, in which the meta-model parameters are stored. The computation approach relies on a run-time evaluation of each model's local performance in the neighborhood of the probe. We explore various structures for the meta-model and, for each structure, compare the pre-compiled (retrieval) and run-time (computation) approaches. We demonstrate this fusion methodology in the context of multiple neural network models, although it is broadly applicable to other predictive modeling approaches. The fusion method is illustrated in the development of highly accurate models for emissions, efficiency, and load prediction in a complex power plant. The locally weighted fusion method boosts predictive performance by 30-50% over the baseline single-model approach for the various prediction targets. By comparison, typical fusion strategies based on averaging or global weighting produce only a 2-6% performance boost over the same baseline.
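To make the run-time (computation) variant concrete, below is a minimal sketch of locally weighted fusion. It assumes scikit-learn-style object-level models exposing .predict, a k-nearest-neighbor definition of the probe's neighborhood over a held-out validation set, and inverse-local-RMSE fusion weights; these specifics are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of locally weighted fusion (run-time "computation" variant).
# Assumptions (not from the paper): models expose a scikit-learn-style
# .predict(X), the probe's neighborhood is its k nearest validation points
# in Euclidean distance, and fusion weights are inverse local RMSEs.
import numpy as np

def locally_weighted_fusion(models, x_probe, X_val, y_val, k=25, eps=1e-9):
    """Fuse ensemble predictions at a single probe point.

    Each model's weight is derived from its accuracy on the k validation
    points nearest to the probe, so the fusion is customized per probe.
    """
    x_probe = np.asarray(x_probe, dtype=float)

    # 1. Find the probe's neighborhood in the validation set.
    dists = np.linalg.norm(X_val - x_probe, axis=1)
    neighbors = np.argsort(dists)[:k]

    # 2. Evaluate each model's local performance (RMSE) on that neighborhood.
    local_rmse = np.array([
        np.sqrt(np.mean((m.predict(X_val[neighbors]) - y_val[neighbors]) ** 2))
        for m in models
    ])

    # 3. Turn local errors into normalized weights (lower error -> higher weight).
    weights = 1.0 / (local_rmse + eps)
    weights /= weights.sum()

    # 4. Fuse the object-level predictions at the probe point.
    preds = np.array([m.predict(x_probe.reshape(1, -1))[0] for m in models])
    return float(np.dot(weights, preds))
```

Inverse-error weighting is only one possible parameterization of the meta-model; the averaging and global-weighting baselines mentioned in the abstract correspond to ignoring the probe's neighborhood entirely, and the pre-compiled (retrieval) variant would replace steps 1-3 with a lookup of stored weights in the CART-derived segment containing the probe.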