A number of methods exist for determining the output of an aggregated classifier. A common approach is the majority-voting scheme. If the performance of the individual classifiers can be ranked in some meaningful way, the voting process can be refined by assigning a weight to each ensemble member. For some base classifiers, such as decision trees, a given node or leaf is activated if the input lies within a well-defined region of input space; each leaf node can therefore be regarded as defining a feature in input space. In this paper, we present a method for adjusting the voting process of an ensemble by assigning individual weights to this set of features, so that different nodes of the same decision tree can contribute differently to the overall vote. Using a randomised "look-up technique" on the training examples, the weights used in the decision process are determined with a perceptron-like learning rule. We present results obtained by applying this technique to bagged ensembles of C4.5 trees and to the so-called PERT classifier, an ensemble of highly randomised decision trees. The proposed technique is compared to the majority-voting scheme on a number of data sets.
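The per-leaf weighting idea can be sketched in code. The following is an illustrative toy only, not the paper's exact algorithm: it uses hand-built decision stumps in place of C4.5/PERT trees, and a plain perceptron-style update on the weight of each leaf that fires. All names here (`Stump`, `weighted_vote`, `train_leaf_weights`, the learning rate) are our own assumptions for the sketch.

```python
class Stump:
    """A one-split decision tree with two leaves, each carrying a class vote in {-1, +1}."""

    def __init__(self, feature, threshold, left_vote, right_vote):
        self.feature = feature
        self.threshold = threshold
        self.votes = {0: left_vote, 1: right_vote}  # leaf id -> class vote

    def leaf(self, x):
        # Route the input to leaf 0 or leaf 1.
        return 0 if x[self.feature] <= self.threshold else 1


def weighted_vote(stumps, weights, x):
    """Combine the ensemble: each firing leaf contributes weight * vote."""
    s = sum(weights[(i, st.leaf(x))] * st.votes[st.leaf(x)]
            for i, st in enumerate(stumps))
    return 1 if s >= 0 else -1


def train_leaf_weights(stumps, data, epochs=20, lr=0.1):
    """Learn one weight per (tree, leaf) pair with a perceptron-like rule,
    so leaves of the same tree can contribute with different strengths."""
    weights = {(i, l): 1.0 for i in range(len(stumps)) for l in (0, 1)}
    for _ in range(epochs):
        for x, y in data:
            if weighted_vote(stumps, weights, x) != y:
                # On a mistake, nudge the weight of every leaf that fired
                # toward the correct class label y.
                for i, st in enumerate(stumps):
                    leaf = st.leaf(x)
                    weights[(i, leaf)] += lr * y * st.votes[leaf]
    return weights
```

With one accurate stump and two always-positive stumps, plain majority voting misclassifies all negative examples; after training, the left leaves of the bad stumps are down-weighted and the ensemble separates the toy data, which is the effect the weighted scheme aims for.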