Boosting weighted ELM for imbalanced learning

  • Authors:
  • Kuan Li, Xiangfei Kong, Zhi Lu, Liu Wenyin, Jianping Yin

  • Venue:
  • Neurocomputing
  • Year:
  • 2014

Abstract

Extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs) is a powerful machine learning technique that has attracted attention for its fast learning speed and good generalization performance. Recently, a weighted ELM was proposed to handle data with imbalanced class distributions. The essence of weighted ELM is that each training sample is assigned an extra weight. Although some empirical weighting schemes have been provided, how to determine better sample weights remains an open problem. In this paper, we propose a Boosting weighted ELM, which embeds weighted ELM seamlessly into a modified AdaBoost framework, to solve this problem. Intuitively, the distribution weights in the AdaBoost framework, which reflect the importance of the training samples, are fed into weighted ELM as training sample weights. Furthermore, AdaBoost is modified in two respects to make it more effective for imbalanced learning: (i) the initial distribution weights are set asymmetrically so that AdaBoost converges faster; (ii) the distribution weights are updated separately for each class so that the asymmetry of the distribution weights is preserved. Experimental results on 16 binary datasets and 5 multiclass datasets from the KEEL repository show that the proposed method achieves more balanced results than weighted ELM.
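
The sketch below illustrates the idea described in the abstract for the binary case, not the authors' exact algorithm. It assumes labels in {-1, +1}, a sigmoid hidden layer, an L2-regularized weighted least-squares solve with parameter C, and an asymmetric initialization that gives each class total distribution mass 1/2; the function names (train_weighted_elm, boosting_weighted_elm) and these parameter choices are illustrative assumptions, and the paper's own weighting and initialization schemes may differ in detail.

```python
import numpy as np

def train_weighted_elm(X, y, sample_weights, n_hidden=50, C=1.0, rng=None):
    """Weighted ELM (sketch): random hidden layer, then a weighted,
    regularized least-squares solve for the output weights beta."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W_in = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                    # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))            # sigmoid hidden outputs
    Hw = H * sample_weights[:, None]                     # apply sample weights W
    # beta = (H^T W H + I/C)^(-1) H^T W y
    beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, Hw.T @ y)
    def predict(Xq):
        Hq = 1.0 / (1.0 + np.exp(-(Xq @ W_in + b)))
        return np.sign(Hq @ beta)
    return predict

def boosting_weighted_elm(X, y, n_rounds=10):
    """Modified AdaBoost whose distribution weights D are fed to weighted
    ELM as per-sample weights. Labels are assumed to be in {-1, +1}."""
    rng = np.random.default_rng(0)
    pos, neg = (y == 1), (y == -1)
    D = np.empty(len(y))
    D[pos] = 0.5 / pos.sum()   # asymmetric init: each class gets total mass 1/2
    D[neg] = 0.5 / neg.sum()
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = train_weighted_elm(X, y, D, rng=rng)
        pred = h(X)
        eps = D[pred != y].sum()             # weighted training error
        if eps <= 0.0 or eps >= 0.5:         # stop on a perfect or weak learner
            break
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        D *= np.exp(-alpha * y * pred)       # standard AdaBoost reweighting
        D[pos] *= 0.5 / D[pos].sum()         # renormalize each class separately
        D[neg] *= 0.5 / D[neg].sum()         # so the weight asymmetry survives
        learners.append(h)
        alphas.append(alpha)
    def predict(Xq):
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, learners)))
    return predict
```

The per-class renormalization in the loop corresponds to modification (ii) above: under the standard AdaBoost update, normalizing over all samples at once would gradually shift mass back toward the majority class and erase the asymmetric initialization of modification (i).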