The meta-learner MLR (Multi-response Linear Regression) has been proposed as a trainable combiner for fusing heterogeneous base-level classifiers. Although it has interesting properties, it has never been evaluated extensively until now. This paper employs learning curves to investigate the relative performance of MLR on multi-class classification problems in comparison with other trainable combiners. Several strategies (namely Reusing, Validation, and Stacking) are considered for using the available data to train both the base-level classifiers and the combiner. Experimental results show that, owing to its limited complexity, MLR can outperform the other combiners for small sample sizes when the Validation or Stacking strategy is adopted. MLR should therefore be a preferred choice of trainable combiner when solving a multi-class task with a small sample size.
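To make the setup concrete, the following is a minimal sketch of an MLR combiner under the Validation strategy: the base-level classifiers are trained on one half of the data, and the combiner is fitted on the base classifiers' posterior estimates over the held-out half, with one linear regression per class (here via scikit-learn's multi-output `LinearRegression`). The choice of base classifiers and dataset here is illustrative, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy 3-class problem (illustrative stand-in for a real dataset)
X, y = make_classification(n_samples=300, n_informative=6,
                           n_classes=3, random_state=0)

# Validation strategy: base classifiers and combiner use disjoint halves
X_base, X_val, y_base, y_val = train_test_split(X, y, test_size=0.5,
                                                random_state=0)

# Heterogeneous base-level classifiers trained on the first half
bases = [GaussianNB().fit(X_base, y_base),
         DecisionTreeClassifier(random_state=0).fit(X_base, y_base)]

def meta_features(X):
    # Concatenate the posterior estimates of all base classifiers
    return np.hstack([b.predict_proba(X) for b in bases])

# MLR: one linear regression per class, targets are one-hot class labels
Y_onehot = np.eye(3)[y_val]
mlr = LinearRegression().fit(meta_features(X_val), Y_onehot)

def predict(X):
    # Final decision: class whose regression response is largest
    return mlr.predict(meta_features(X)).argmax(axis=1)

acc = (predict(X_val) == y_val).mean()
```

MLR's low complexity is visible here: the combiner has only (classes × meta-features + 1) parameters per class, which is why it remains trainable when few samples are left for the combiner.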