Automatically countering imbalance and its empirical relationship to cost

  • Authors:
  • Nitesh V. Chawla; David A. Cieslak; Lawrence O. Hall; Ajay Joshi

  • Affiliations:
  • Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA (Chawla, Cieslak); Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620-5399, USA (Hall, Joshi)

  • Venue:
  • Data Mining and Knowledge Discovery
  • Year:
  • 2008

Abstract

Learning from imbalanced data sets presents a convoluted problem both from the modeling and cost standpoints. In particular, when a class is of great interest but occurs relatively rarely, such as in cases of fraud, instances of disease, and regions of interest in large-scale simulations, there is a correspondingly high cost for the misclassification of rare events. Under such circumstances, the data set is often re-sampled to generate models with high minority class accuracy. However, the sampling methods face a common, but important, criticism: how to automatically discover the proper amount and type of sampling? To address this problem, we propose a wrapper paradigm that discovers the amount of re-sampling for a data set based on optimizing evaluation functions like the f-measure, Area Under the ROC Curve (AUROC), cost, cost-curves, and the cost-dependent f-measure. Our analysis of the wrapper is twofold. First, we report the interaction between different evaluation and wrapper optimization functions. Second, we present a set of results in a cost-sensitive environment, including scenarios of unknown or changing cost matrices. We also compared the performance of the wrapper approach with that of cost-sensitive learning methods (MetaCost and the Cost-Sensitive Classifiers) and found the wrapper to outperform the cost-sensitive classifiers in a cost-sensitive environment. Lastly, we obtained the lowest cost per test example compared to any result we are aware of for the KDD-99 Cup intrusion detection data set.
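
The wrapper described in the abstract is, at its core, a search over re-sampling amounts guided by an evaluation function. As a rough illustration only, the Python sketch below grid-searches candidate minority over-sampling fractions and keeps the one with the best cross-validated f-measure. The decision-tree classifier, the candidate grid, the random-duplication sampler, and all function names here are assumptions made for this sketch, not the authors' implementation, which also determines the type of sampling and supports the other optimization functions listed above (AUROC, cost, cost-curves, cost-dependent f-measure).

    # Minimal, hypothetical sketch of a re-sampling wrapper (not the authors' code).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import f1_score


    def oversample_minority(X, y, extra_fraction, rng):
        """Duplicate a random extra_fraction of minority examples (binary labels, minority = 1)."""
        minority_idx = np.flatnonzero(y == 1)
        n_extra = int(extra_fraction * len(minority_idx))
        if n_extra == 0:
            return X, y
        picked = rng.choice(minority_idx, size=n_extra, replace=True)
        return np.vstack([X, X[picked]]), np.concatenate([y, y[picked]])


    def wrapper_select_sampling(X, y, candidate_fractions, seed=0):
        """Return the over-sampling fraction with the best 5-fold cross-validated f-measure."""
        rng = np.random.default_rng(seed)
        best_frac, best_f1 = 0.0, -1.0
        for frac in candidate_fractions:
            scores = []
            cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
            for train, test in cv.split(X, y):
                # Re-sample only the training fold, then evaluate on the untouched test fold.
                X_tr, y_tr = oversample_minority(X[train], y[train], frac, rng)
                clf = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
                scores.append(f1_score(y[test], clf.predict(X[test])))
            mean_f1 = float(np.mean(scores))
            if mean_f1 > best_f1:
                best_frac, best_f1 = frac, mean_f1
        return best_frac, best_f1


    if __name__ == "__main__":
        # Synthetic data with roughly a 5% minority class stands in for a real imbalanced problem.
        X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
        frac, f1 = wrapper_select_sampling(X, y, candidate_fractions=[0.0, 0.5, 1.0, 2.0, 4.0])
        print(f"selected over-sampling fraction: {frac}, CV f-measure: {f1:.3f}")

Swapping f1_score for a cost- or AUROC-based criterion changes only the inner scoring line, which is what lets the same wrapper loop serve the different optimization functions compared in the paper.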