Ensemble methods combine several individual pattern classifiers to achieve better classification. The challenge is to choose the minimal number of classifiers that achieves the best performance. An ensemble containing too many members can incur large storage requirements and may even reduce classification performance. The goal of ensemble pruning is to identify a subset of ensemble members that performs at least as well as the original ensemble, and to discard the remaining members. In this paper, we introduce the Collective-Agreement-based Pruning (CAP) method. Rather than ranking individual members, CAP ranks subsets by considering the individual predictive ability of each member along with the degree of redundancy among them. Subsets whose members agree strongly with the class while having low inter-agreement are preferred.
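The subset-ranking idea described above can be sketched with a CFS-style merit score: a subset scores high when its members agree with the true class and low when they agree with each other. This is a minimal illustrative sketch; the scoring function, its name, and the toy agreement values are assumptions, not the paper's exact formulation.

```python
import math
from itertools import combinations

def subset_merit(subset, class_agreement, inter_agreement):
    """Merit of a candidate sub-ensemble.

    High when members individually agree with the class and low when
    members agree with one another (redundancy).  Uses a CFS-style
    heuristic (an assumption; the paper's exact score may differ):
        merit = k * avg_cf / sqrt(k + k*(k-1) * avg_ff)
    where avg_cf is the mean member-class agreement and avg_ff the
    mean pairwise inter-member agreement over the subset.
    """
    k = len(subset)
    avg_cf = sum(class_agreement[m] for m in subset) / k
    if k == 1:
        return avg_cf
    pairs = list(combinations(subset, 2))
    avg_ff = sum(inter_agreement[frozenset(p)] for p in pairs) / len(pairs)
    return (k * avg_cf) / math.sqrt(k + k * (k - 1) * avg_ff)

# Toy example: members A and B are accurate but highly redundant;
# C is slightly weaker but diverse.  (Illustrative numbers only.)
class_agr = {'A': 0.80, 'B': 0.80, 'C': 0.75}
inter_agr = {frozenset('AB'): 0.95,
             frozenset('AC'): 0.30,
             frozenset('BC'): 0.30}

merit_redundant = subset_merit(('A', 'B'), class_agr, inter_agr)
merit_diverse = subset_merit(('A', 'C'), class_agr, inter_agr)
# The diverse pair {A, C} outranks the redundant pair {A, B}
# despite C's lower individual accuracy.
```

This illustrates why ranking subsets differs from ranking members: member C would be ranked last individually, yet pairing it with A yields the higher-scoring sub-ensemble.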