Ensembles are often capable of greater predictive performance than any of their individual classifiers. Although ensemble members should ideally make different kinds of errors, the majority-voting scheme typically used treats each classifier as though it contributed equally to the group's performance. This is particularly limiting on unbalanced datasets, where one is more interested in complementary classifiers that improve the true positive rate without significantly increasing the false positive rate. We therefore implement a genetic algorithm-based framework that weights the contribution of each classifier according to an appropriate fitness function, so that classifiers which complement each other on the unbalanced dataset are preferred, resulting in significantly improved performance. The proposed framework can be built on top of any collection of classifiers and used with different fitness functions.
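The abstract gives no implementation details, but the weighted-voting idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the G-mean fitness (geometric mean of true positive and true negative rates) is one plausible choice for unbalanced data, and the function names (`gmean_fitness`, `evolve_weights`), the GA hyperparameters, and the representation of validation predictions as a 0/1 `votes` matrix are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmean_fitness(weights, votes, y_true):
    """Fitness of one weight vector: geometric mean of TPR and TNR,
    which rewards complementary classifiers on unbalanced data.
    `votes` is an (n_classifiers, n_samples) array of 0/1 predictions."""
    scores = weights @ votes                            # weighted vote per sample
    y_pred = (scores >= weights.sum() / 2).astype(int)  # weighted majority rule
    tpr = np.mean(y_pred[y_true == 1] == 1) if (y_true == 1).any() else 0.0
    tnr = np.mean(y_pred[y_true == 0] == 0) if (y_true == 0).any() else 0.0
    return np.sqrt(tpr * tnr)

def evolve_weights(votes, y_true, pop_size=50, generations=100,
                   mutation_sigma=0.1):
    """Simple generational GA over non-negative classifier weights."""
    n = votes.shape[0]
    pop = rng.random((pop_size, n))                     # random initial weights
    for _ in range(generations):
        fit = np.array([gmean_fitness(w, votes, y_true) for w in pop])
        # fitness-proportional selection of parent pairs
        p = fit / fit.sum() if fit.sum() > 0 else np.full(pop_size, 1 / pop_size)
        parents = pop[rng.choice(pop_size, size=(pop_size, 2), p=p)]
        # uniform crossover followed by Gaussian mutation
        mask = rng.random((pop_size, n)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children += rng.normal(0, mutation_sigma, children.shape)
        children = np.clip(children, 0, None)           # keep weights non-negative
        # elitism: carry the best individual over unchanged
        children[0] = pop[np.argmax(fit)]
        pop = children
    fit = np.array([gmean_fitness(w, votes, y_true) for w in pop])
    return pop[np.argmax(fit)]
```

Given a fixed pool of trained base classifiers and their 0/1 predictions on a held-out validation set, `evolve_weights(votes, y_valid)` returns the weight vector with the best fitness found. Swapping `gmean_fitness` for another measure (e.g., an F-measure) changes which classifiers the GA prefers, consistent with the abstract's point that the framework can be used with different fitness functions.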