The combination of classifiers is a powerful tool for improving accuracy: the predictions of multiple models are obtained and combined. Many practical and useful combination techniques work by using the outputs of several classifiers as the input to a second-layer classifier. The problem with this and other multi-classifier approaches is that large amounts of memory are required to store the set of classifiers and, more importantly, the comprehensibility of a single classifier is lost: no knowledge or insight can be acquired from the combined model. In order to overcome these limitations, in this work we analyse the idea of "mimicking" the semantics of an ensemble of classifiers. More precisely, we use the combination of classifiers to label an invented random dataset, and then use this artificially labelled dataset to re-train a single model. The resulting model has the following advantages: its accuracy is close to that of the highly accurate combined model; as a single model it requires far less memory; no additional validation set must be reserved for the procedure; and, most importantly, it is expressed as a single classifier in terms of the original attributes and can therefore be comprehensible. We first illustrate this methodology using a popular data-mining package, showing that it can spread into common practice, and then we use our system SMILES, which automates the process and takes advantage of its ensemble method.
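The mimicking procedure described above can be sketched as follows. This is a minimal illustration using scikit-learn rather than the authors' SMILES system or the data-mining package they refer to; the choice of a random forest as the ensemble, a decision tree as the single comprehensible model, and uniform sampling over the attribute ranges are all assumptions made for the sketch.

```python
# Hypothetical sketch of "mimicking" an ensemble with a single model.
# Uses scikit-learn; the ensemble, mimic model, and sampling scheme
# are illustrative choices, not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Original training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train an accurate but opaque ensemble.
ensemble = RandomForestClassifier(n_estimators=100, random_state=0)
ensemble.fit(X_train, y_train)

# 2. Invent a random dataset over the original attribute ranges and
#    label it with the ensemble's predictions (no validation set needed).
rng = np.random.default_rng(0)
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
X_random = rng.uniform(lo, hi, size=(5000, X_train.shape[1]))
y_random = ensemble.predict(X_random)

# 3. Re-train a single comprehensible model on the artificially
#    labelled data (here combined with the original training data).
mimic = DecisionTreeClassifier(max_depth=5, random_state=0)
mimic.fit(np.vstack([X_random, X_train]),
          np.concatenate([y_random, y_train]))
```

The resulting `mimic` tree is expressed in terms of the original attributes, so it can be inspected directly, while its predictions are trained to agree with those of the ensemble.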