A novel automatic two-stage locally regularized classifier construction method using the extreme learning machine

  • Authors:
  • Dajun Du;Kang Li;George W. Irwin;Jing Deng

  • Affiliations:
Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China and School of Electronics, Electrica ...;School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK

  • Venue:
  • Neurocomputing
  • Year:
  • 2013


Abstract

This paper investigates the design of a linear-in-the-parameters (LITP) regression classifier for two-class problems. Most existing algorithms learn a classifier (model) from the available training data using some stopping criterion, such as Akaike's final prediction error (FPE). The drawback is that the resulting classifier is not constructed directly on the basis of its generalization capability. The main objective of this paper is to improve the sparsity and generalization capability of a classifier while reducing the computational expense of producing it. This is achieved by proposing an automatic two-stage locally regularized classifier construction (TSLRCC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters of each term, such as the width of a Gaussian function or the power of a polynomial term, are first determined by the ELM. In the first stage, an initial classifier is then generated by direct evaluation of these candidate models according to the leave-one-out (LOO) misclassification rate. In the second stage, the significance of each selected regressor term is checked and insignificant terms are replaced. To reduce the computational complexity, a proper regression context is defined that allows a fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
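The abstract's key ingredients can be sketched in code: with the nonlinear parameters of each candidate term fixed at random (as the ELM prescribes), the LITP model is linear in its remaining parameters, so its LOO misclassification rate is available in closed form from the hat matrix rather than by refitting n times. The following minimal sketch illustrates this on a synthetic two-class problem; the data, Gaussian-node features, candidate sizes, and regularization constant are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (illustrative assumption, not from the paper):
# two Gaussian clusters with labels -1 / +1.
X = np.vstack([rng.normal(-1.0, 0.5, (40, 2)),
               rng.normal(+1.0, 0.5, (40, 2))])
y = np.hstack([-np.ones(40), np.ones(40)])

def elm_hidden(X, centres, width):
    # Gaussian hidden nodes: the nonlinear parameters (centres, width)
    # are fixed at random, as the ELM prescribes.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_misclassification(P, y, lam=1e-3):
    # LOO misclassification rate of a regularized least-squares LITP
    # classifier, computed in closed form via the hat matrix (PRESS
    # identity) instead of n separate refits.
    A_inv = np.linalg.inv(P.T @ P + lam * np.eye(P.shape[1]))
    theta = A_inv @ (P.T @ y)
    h = np.einsum('ij,jk,ik->i', P, A_inv, P)   # leverages diag(P A^-1 P^T)
    y_loo = y - (y - P @ theta) / (1.0 - h)     # LOO predictions
    return np.mean(np.sign(y_loo) != y)

# First stage (sketch): evaluate candidate models of different sizes,
# with centres drawn at random from the data, and keep the one that
# minimizes the LOO misclassification rate.
best_rate, best_size = min(
    (loo_misclassification(elm_hidden(X, X[rng.choice(len(X), m)], 1.0), y), m)
    for m in (2, 5, 10)
)
print(best_rate, best_size)
```

The second stage described in the abstract would then revisit each selected term and swap out any whose removal leaves the LOO rate essentially unchanged; that refinement loop is omitted here for brevity.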