C4.5: programs for machine learning
Decision Combination in Multiple Classifier Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Democracy in neural nets: voting schemes for classification
Neural Networks
A Method of Combining Multiple Experts for the Recognition of Unconstrained Handwritten Numerals
IEEE Transactions on Pattern Analysis and Machine Intelligence
Error reduction through learning multiple descriptions
Machine Learning
Combining Nearest Neighbor Classifiers Through Multiple Feature Subsets
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Option Decision Trees with Majority Votes
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
Generating Classifier Committees by Stochastically Selecting both Attributes and Training Examples
PRICAI '98 Proceedings of the 5th Pacific Rim International Conference on Artificial Intelligence: Topics in Artificial Intelligence
Application of majority voting to pattern recognition: an analysis of its behavior and performance
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Limiting the Number of Trees in Random Forests
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Feature Subsets for Classifier Combination: An Enumerative Experiment
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Distributed Pasting of Small Votes
MCS '02 Proceedings of the Third International Workshop on Multiple Classifier Systems
Ordinal classification with monotonicity constraints by variable consistency bagging
RSCTC'10 Proceedings of the 7th international conference on Rough sets and current trends in computing
Variable consistency bagging ensembles
Transactions on Rough Sets XI
Recent classifier combination frameworks have proposed several ways of weakening a learning set and have shown that these weakening methods improve prediction accuracy. In this paper we focus on two of them: learning-set sampling (Breiman's bagging) and random feature-subset selection (Bay's Multiple Feature Subsets, MFS). We present a combination scheme called 'Bagfs', in which new learning sets are generated from both bootstrap replicates and randomly selected feature subsets. The performance of the three methods (bagging, MFS, and Bagfs) is assessed with a decision-tree inducer (C4.5) and a majority-voting rule. We also study whether the way in which the weak classifiers are created has a significant influence on the performance of their combination. To answer this question, we applied the Cochran Q test, which compares the three weakening methods jointly on a given database and indicates whether they differ significantly, together with the McNemar test for pairwise comparisons between algorithms. Initial results on 14 conventional databases show that, on average, Bagfs exhibits the best agreement between prediction and supervision. The Cochran Q test indicated that the way the weak classifiers were created significantly influenced combination performance on at least 4 of the 14 databases analyzed.
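For readers who want a concrete picture of the scheme described above, here is a minimal Python sketch of a Bagfs-style ensemble: each member is trained on a bootstrap replicate of the learning set restricted to a randomly drawn feature subset, and member predictions are combined by plain majority vote. scikit-learn's CART trees stand in for the C4.5 inducer used in the paper, and all names and parameters (BagfsEnsemble, n_members, subset_size) are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from collections import Counter
    from sklearn.tree import DecisionTreeClassifier

    class BagfsEnsemble:
        """Sketch of a Bagfs-style ensemble: bagging + random feature subsets.

        CART (scikit-learn) stands in for the C4.5 inducer; n_members and
        subset_size are illustrative choices, not values from the paper.
        """
        def __init__(self, n_members=10, subset_size=0.5, random_state=0):
            self.n_members = n_members
            self.subset_size = subset_size
            self.rng = np.random.default_rng(random_state)

        def fit(self, X, y):
            n, d = X.shape
            k = max(1, int(self.subset_size * d))
            self.members_ = []
            for _ in range(self.n_members):
                rows = self.rng.integers(0, n, size=n)            # bootstrap replicate
                cols = self.rng.choice(d, size=k, replace=False)  # random feature subset
                tree = DecisionTreeClassifier().fit(X[rows][:, cols], y[rows])
                self.members_.append((cols, tree))
            return self

        def predict(self, X):
            votes = np.array([t.predict(X[:, cols]) for cols, t in self.members_])
            # plain majority vote across the member predictions for each example
            return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

Setting subset_size=1.0 makes every member see all features, which reduces the sketch to ordinary bagging; training each member on the full learning set with a random feature subset instead of a bootstrap replicate would correspond to MFS. This is one way to see Bagfs as combining the two weakening methods.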
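The two significance tests mentioned above operate on per-example correctness indicators (1 if a method classified the example correctly, 0 otherwise). Below is a short sketch of both, assuming such 0/1 correctness matrices as input; the function names are ours, and the formulas are the standard Cochran Q statistic and the continuity-corrected McNemar statistic.

    import numpy as np
    from scipy.stats import chi2

    def cochran_q(correct):
        """Cochran Q test on an (n_examples, k_methods) 0/1 correctness matrix.

        H0: all k methods have the same proportion of correct predictions.
        Returns the Q statistic and its p-value (chi-square with k-1 df).
        """
        correct = np.asarray(correct)
        n, k = correct.shape
        G = correct.sum(axis=0)   # number of successes per method
        R = correct.sum(axis=1)   # number of successes per example
        T = correct.sum()         # grand total of successes
        q = (k - 1) * (k * (G ** 2).sum() - T ** 2) / (k * T - (R ** 2).sum())
        return q, chi2.sf(q, df=k - 1)

    def mcnemar(correct_a, correct_b):
        """McNemar test (continuity-corrected) on two 0/1 correctness vectors."""
        a = np.asarray(correct_a)
        b_vec = np.asarray(correct_b)
        b = int(np.sum((a == 1) & (b_vec == 0)))  # A correct, B wrong
        c = int(np.sum((a == 0) & (b_vec == 1)))  # A wrong, B correct
        stat = (abs(b - c) - 1) ** 2 / (b + c) if b + c else 0.0
        return stat, chi2.sf(stat, df=1)

In the experimental setup described in the abstract, cochran_q would be run once per database on the three methods' correctness columns, and mcnemar on each of the three method pairs when the joint test rejects.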