Dynamic integration with random forests
ECML'06: Proceedings of the 17th European Conference on Machine Learning
Classifier combining is a popular method for improving classification quality: instead of using a single classifier, several classifiers are organized into a classifier system and their outputs are aggregated into a final prediction. However, most commonly used aggregation methods are static, i.e., they do not adapt to the currently classified pattern. In this paper, we provide a general framework for dynamic classifier systems, which use dynamic confidence measures to adapt to a particular pattern. Our experiments with random forests on 5 artificial and 11 real-world benchmark datasets show that dynamic classifier systems can significantly outperform both confidence-free and static classifier systems.
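The idea of a dynamic classifier system can be sketched in a few lines: instead of giving every ensemble member a fixed vote, each member's vote on a query pattern is weighted by a dynamic confidence measure, here its accuracy on the validation points nearest to that pattern. This is a minimal illustrative sketch, not the paper's exact method; the toy ensemble, validation data, and function names (`local_accuracy_weights`, `dynamic_predict`) are assumptions made for the example.

```python
# Hypothetical sketch of dynamic (confidence-weighted) classifier combination.
# Each ensemble member's vote is weighted by its accuracy on the k validation
# points nearest to the query pattern; data and classifiers are illustrative.
import numpy as np

def local_accuracy_weights(classifiers, X_val, y_val, x, k=3):
    """Weight each classifier by its accuracy on the k validation points nearest to x."""
    dists = np.linalg.norm(X_val - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = []
    for clf in classifiers:
        preds = np.array([clf(xi) for xi in X_val[nearest]])
        weights.append((preds == y_val[nearest]).mean())
    return np.array(weights)

def dynamic_predict(classifiers, X_val, y_val, x, k=3):
    """Aggregate member votes on x, each weighted by its local accuracy near x."""
    w = local_accuracy_weights(classifiers, X_val, y_val, x, k)
    votes = {}
    for clf, wi in zip(classifiers, w):
        votes[clf(x)] = votes.get(clf(x), 0.0) + wi
    return max(votes, key=votes.get)

# Toy ensemble: two threshold rules on individual features.
clfs = [lambda x: int(x[0] > 0.5), lambda x: int(x[1] > 0.5)]
X_val = np.array([[0.2, 0.9], [0.8, 0.1], [0.6, 0.7], [0.1, 0.3]])
y_val = np.array([1, 0, 1, 0])
print(dynamic_predict(clfs, X_val, y_val, np.array([0.7, 0.8])))  # → 1
```

A static system would give both rules equal weight everywhere; the dynamic weights above shift influence toward whichever member is more accurate in the neighborhood of the query, which is the adaptivity the abstract contrasts with static aggregation.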