Improving the classification performance of learning systems can be achieved by constructing multiple classifiers, i.e., ensembles of sub-classifiers whose individual predictions are combined to classify new objects. Diversity among the sub-classifiers is one of the necessary conditions for improving classification accuracy. To obtain more diverse sub-classifiers, we extend the bagging approach by combining the sampling of different distributions of learning examples with the selection of multiple feature subsets. We summarize the results of our experiments on the usefulness of different feature selection techniques in this extension. The main aim of the paper is to examine three methods for aggregating the predictions of sub-classifiers in the extended bagging classifier. Our experimental results show that the extended classifier with a dynamic choice of answers, instead of a simple voting aggregation rule, is more accurate than standard bagging.
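The abstract does not detail the three aggregation methods, so the following Python code is only a minimal sketch of the general idea: bagging extended with per-classifier random feature subsets, comparing simple majority voting against one plausible dynamic-choice rule that, for each test object, trusts the sub-classifier most accurate on that object's nearest training neighbors. The dataset, parameter values (number of estimators, feature fraction, neighborhood size), and the specific local-accuracy heuristic are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_estimators = 25
feat_frac = 0.5  # fraction of features per sub-classifier (assumed value)
n_feats = max(1, int(feat_frac * X.shape[1]))

models, feat_subsets = [], []
for _ in range(n_estimators):
    # Bootstrap sample of learning examples plus a random feature subset,
    # so sub-classifiers differ in both examples and features.
    boot = rng.integers(0, len(X_tr), len(X_tr))
    feats = rng.choice(X.shape[1], n_feats, replace=False)
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X_tr[boot][:, feats], y_tr[boot])
    models.append(clf)
    feat_subsets.append(feats)

# Simple voting aggregation: majority over all sub-classifier predictions.
votes = np.stack([m.predict(X_te[:, f]) for m, f in zip(models, feat_subsets)])
vote_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Dynamic choice of answers (assumed heuristic): for each test object,
# take the answer of the sub-classifier that is most accurate on the
# k nearest training examples of that object.
k = 10
nn = NearestNeighbors(n_neighbors=k).fit(X_tr)
_, nbr_idx = nn.kneighbors(X_te)

dyn_pred = np.empty(len(X_te), dtype=int)
for i, nbrs in enumerate(nbr_idx):
    local_acc = [(m.predict(X_tr[nbrs][:, f]) == y_tr[nbrs]).mean()
                 for m, f in zip(models, feat_subsets)]
    best = int(np.argmax(local_acc))
    dyn_pred[i] = models[best].predict(X_te[i:i + 1, feat_subsets[best]])[0]

print("voting accuracy :", (vote_pred == y_te).mean())
print("dynamic accuracy:", (dyn_pred == y_te).mean())
```

The contrast between the two final accuracies illustrates the paper's claim only in spirit; whether dynamic selection wins on a given run depends on the data and the assumed neighborhood size k.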