IIvotes ensemble for imbalanced data
Intelligent Data Analysis - Combined Learning Methods and Mining Complex Data
In this paper we present a new framework for improving classifiers learned from imbalanced data. The framework integrates the SPIDER method for selective data pre-processing with the Ivotes ensemble. The goal of this integration is to achieve a better balance between sensitivity and specificity for the minority class than a single classifier combined with SPIDER, while keeping overall accuracy at a similar level. The resulting IIvotes framework was evaluated in a series of experiments in which we tested its performance with two types of component classifiers (tree- and rule-based). The results show that IIvotes improves these evaluation measures, and they also demonstrate the advantages of the abstaining mechanism (i.e., allowing component classifiers to refrain from making predictions) in IIvotes rule ensembles.
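To make the two ingredients concrete, the sketch below illustrates them in simplified form: a SPIDER-like amplification step that duplicates minority examples surrounded by majority neighbours (the real SPIDER also cleans or relabels noisy majority examples, which is omitted here), and a voting rule in which component classifiers may abstain by returning `None`. Function names, the `k` parameter, and the toy data are illustrative assumptions, not the paper's actual implementation; in the full IIvotes framework the pre-processing step would be applied to each importance-sampled training set inside the Ivotes loop.

```python
from collections import Counter

def spider_amplify(data, minority_label, k=3):
    """Simplified SPIDER-like pre-processing (assumption: real SPIDER
    also handles noisy majority examples). Each minority example is
    duplicated once per majority example among its k nearest
    neighbours, strengthening hard minority cases."""
    def dist(a, b):
        # squared Euclidean distance between feature tuples
        return sum((u - v) ** 2 for u, v in zip(a, b))

    out = list(data)
    for x, y in data:
        if y != minority_label:
            continue
        # k nearest neighbours of x among the other examples
        neigh = sorted((d for d in data if d[0] is not x),
                       key=lambda d: dist(d[0], x))[:k]
        extra = sum(1 for _, ny in neigh if ny != minority_label)
        out.extend([(x, y)] * extra)
    return out

def vote_with_abstain(predictions):
    """Combine component-classifier predictions; a classifier abstains
    by returning None, and abstentions simply cast no vote."""
    votes = Counter(p for p in predictions if p is not None)
    if not votes:
        return None  # the whole ensemble abstains
    return votes.most_common(1)[0][0]
```

For example, on a toy set with three majority points and one minority point, `spider_amplify` adds three copies of the minority example, and `vote_with_abstain([1, None, 0, 1])` returns `1` because the abstaining classifier is ignored.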