We experimentally compare the performance of three approaches to ensemble-based classification on general multi-class datasets. These are random forest, error-correcting output codes (ECOC), and ECOC enhanced with bootstrapping and class-separability weighting (ECOC-BW). The experiments suggest that ECOC-BW yields better generalisation performance than either random forest or unmodified ECOC. A bias-variance analysis indicates that ECOC benefits from reduced bias compared with random forest, and that ECOC-BW additionally benefits from reduced variance. One disadvantage of the ECOC-based algorithms, however, is their greater computational demand compared with random forest, which leads to longer training times.
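A minimal sketch of this kind of comparison is given below, using scikit-learn. It is not the authors' code: the dataset (digits), the base learner, and all parameter values are illustrative assumptions, and the ECOC-BW variant (bootstrapping plus class-separability weighting) has no off-the-shelf implementation here, so only plain ECOC and random forest are compared. The bias-variance estimate follows one common decomposition for 0-1 loss, in the style of Domingos (2000); the paper itself may use a different scheme.

import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)  # a 10-class stand-in dataset

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # Plain ECOC: roughly code_size * n_classes binary subproblems, each
    # fitted by its own tree; labels are decoded against the code matrix.
    "ECOC": OutputCodeClassifier(
        DecisionTreeClassifier(random_state=0), code_size=2.0, random_state=0
    ),
}

# Generalisation performance via cross-validated accuracy.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")

def bias_variance_01(model, X_tr, y_tr, X_te, y_te, n_rounds=20, seed=0):
    """Estimate 0-1 loss bias and variance by retraining the model on
    bootstrap resamples of the training set (Domingos-style, assuming
    noise-free labels)."""
    rng = np.random.default_rng(seed)
    preds = np.empty((n_rounds, len(y_te)), dtype=int)
    for r in range(n_rounds):
        idx = rng.integers(0, len(y_tr), size=len(y_tr))  # bootstrap sample
        preds[r] = clone(model).fit(X_tr[idx], y_tr[idx]).predict(X_te)
    # Main prediction: most frequent label per test point across rounds.
    main = np.array([np.bincount(col).argmax() for col in preds.T])
    bias = np.mean(main != y_te)       # main prediction disagrees with truth
    variance = np.mean(preds != main)  # individual runs deviate from main
    return bias, variance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in models.items():
    b, v = bias_variance_01(model, X_tr, y_tr, X_te, y_te)
    print(f"{name}: bias {b:.3f}, variance {v:.3f}")

The bootstrap loop also makes the computational point concrete: each round refits every binary ECOC subproblem from scratch, whereas the random forest trains one tree per ensemble member, which is why the ECOC-based methods incur longer training times.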