In this paper, we present an approach for compressing a rule-based pairwise classifier ensemble into a single rule set that can be used directly for classification. The key idea is to re-encode the training examples using information about which of the original rules of the ensemble cover each example, and to use the re-encoded examples for training a rule-based meta-level classifier. We not only show that this approach is more accurate than using the same rule learner at the base level (which could have been expected for such a variant of stacking), but also demonstrate that the resulting meta-level rule set can be straightforwardly translated back into a rule set at the base level. Our key result is that the rule sets obtained in this way are of comparable complexity to those of the original rule learner, but considerably more accurate.
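The re-encoding step described above can be sketched as follows. This is a minimal, hypothetical illustration (the rule representation, function names, and toy data are assumptions, not the authors' implementation): each training example is mapped to a binary meta-level feature vector whose i-th entry indicates whether rule i of the ensemble covers the example.

```python
# Hypothetical sketch of the re-encoding step: a rule is modeled as a
# dict of attribute -> required value, and it covers an example iff all
# of its conditions hold. This is an illustrative assumption, not the
# paper's actual rule representation.

def rule_covers(rule, example):
    """Return True iff every condition of the rule holds for the example."""
    return all(example.get(attr) == val for attr, val in rule.items())

def reencode(examples, rules):
    """Map each example to its binary rule-coverage feature vector."""
    return [[int(rule_covers(r, x)) for r in rules] for x in examples]

# Toy illustration with two rules and two examples.
rules = [
    {"shape": "round"},                   # rule 1: shape = round
    {"shape": "square", "color": "red"},  # rule 2: shape = square AND color = red
]
examples = [
    {"shape": "round", "color": "blue"},
    {"shape": "square", "color": "red"},
]
meta_features = reencode(examples, rules)
# meta_features == [[1, 0], [0, 1]]
```

The re-encoded vectors would then serve as the input representation for the meta-level rule learner, in the spirit of stacking.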