Imbalanced learning is a challenging task in machine learning: the data associated with one class are far fewer than those associated with the other. Traditional machine learning methods that seek classification accuracy over the full range of instances are ill-suited to this problem, since they tend to assign all data to the majority class, which is usually the less important one. In this correspondence, the authors describe a new approach, the biased minimax probability machine (BMPM), for imbalanced learning. The BMPM model is demonstrated to provide an elegant and systematic way to handle class imbalance. More specifically, by controlling the worst-case accuracy of the majority class over all class-conditional densities with a given mean and covariance matrix, the model incorporates a bias toward the minority class quantitatively and systematically. By establishing an explicit connection between classification accuracy and the bias, the approach distinguishes itself from many current imbalanced-learning methods, which often impose a bias on the minority data by tuning intermediate factors through trial and error. The authors detail the theoretical foundation, prove the model's solvability, propose an efficient optimization algorithm, and perform a series of experiments to evaluate the new model. Comparison with other competitive methods demonstrates its effectiveness.
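The bias mechanism described in the abstract can be sketched as an optimization problem. The formulation below is a hedged reconstruction based only on the abstract's description (worst-case class-conditional accuracy under a fixed mean and covariance); the symbol names and the exact constraint form are assumptions, not taken from this text:

```latex
% Sketch: x = minority class, y = majority class; (\mu, \Sigma) are the
% given class means and covariances; \beta_0 is a prescribed lower bound
% on the worst-case accuracy of the majority class.
\begin{align}
\max_{\alpha,\; \mathbf{a}\neq\mathbf{0},\; b}\quad
  & \alpha \\
\text{s.t.}\quad
  & \inf_{\mathbf{x}\sim(\boldsymbol{\mu}_x,\boldsymbol{\Sigma}_x)}
      \Pr\{\mathbf{a}^{\top}\mathbf{x} \ge b\} \ge \alpha, \\
  & \inf_{\mathbf{y}\sim(\boldsymbol{\mu}_y,\boldsymbol{\Sigma}_y)}
      \Pr\{\mathbf{a}^{\top}\mathbf{y} \le b\} \ge \beta_0 .
\end{align}
```

Under this reading, the minority-class worst-case accuracy $\alpha$ is maximized while the majority-class worst-case accuracy is held above the explicit threshold $\beta_0$, which is what lets the bias be set quantitatively rather than by trial and error. Each probability constraint, taken over all densities sharing the given mean and covariance, can be turned into a deterministic second-order-cone-type constraint of the form $\pm(\mathbf{a}^{\top}\boldsymbol{\mu} - b) \ge \kappa(\alpha)\sqrt{\mathbf{a}^{\top}\boldsymbol{\Sigma}\,\mathbf{a}}$ with $\kappa(\alpha)=\sqrt{\alpha/(1-\alpha)}$, the standard worst-case (Marshall–Olkin style) probability bound used in minimax probability machines.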