Evolution strategies based adaptive Lp LS-SVM

  • Authors:
  • Liwei Wei; Zhenyu Chen; Jianping Li

  • Affiliations:
  • China National Institute of Standardization, Beijing 100088, China; School of Business Administration, Northeastern University, Shenyang 110819, China; Institute of Policy and Management, Chinese Academy of Sciences, Beijing 100080, China

  • Venue:
  • Information Sciences: an International Journal
  • Year:
  • 2011

Abstract

Not only different databases but also the two classes of data within a single database can have different data structures. SVM and LS-SVM typically minimize the empirical φ-risk; regularized versions subject to a fixed penalty (L2 or L1 penalty) are non-adaptive since their penalty forms are pre-determined, so they often perform well only in certain situations. For example, LS-SVM with the L2 penalty is not preferred when the underlying model is sparse. This paper proposes an adaptive penalty learning procedure, the evolution strategies (ES) based adaptive Lp least squares support vector machine (ES-based Lp LS-SVM), to address this issue. By introducing multiple kernels, an Lp penalty based nonlinear objective function is derived. The iterative re-weighted minimal solver (IRMS) algorithm is used to solve this nonlinear function, and evolution strategies are then used to solve the multi-parameter optimization problem. The penalty parameter p, the kernel parameters and the regularization parameters are selected adaptively by the proposed ES-based algorithm while training on the data, which makes it easier to reach the optimal solution. Numerical experiments are conducted on two artificial data sets and six real-world data sets. The experimental results show that the proposed procedure offers better generalization performance than the standard SVM, the LS-SVM and other improved algorithms.
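
The abstract only outlines the training procedure, so the following is a minimal, hypothetical sketch of how an ES-driven adaptive Lp LS-SVM could be assembled: an inner iteratively re-weighted least-squares solver (a quadratic surrogate of the Lp penalty, standing in for the IRMS step mentioned above) and an outer (mu, lambda) evolution strategy that adapts the penalty exponent p, the regularization constant and an RBF kernel width. The function names (rbf_kernel, lp_lssvm_irls, fitness, es_search), the single-kernel simplification and the hold-out-error fitness are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix; a single kernel stands in for the
    # multiple-kernel construction described in the paper (assumption).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lp_lssvm_irls(K, y, p, C, n_iter=30, eps=1e-6):
    # Iteratively re-weighted least squares for dual coefficients alpha.
    # The Lp penalty |alpha_j|^p is replaced in each pass by the quadratic
    # surrogate w_j * alpha_j^2 with w_j proportional to |alpha_j|^(p-2),
    # which is one common way an IRMS-style re-weighting can be realized.
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        w = p * np.maximum(np.abs(alpha), eps) ** (p - 2)
        A = K.T @ K + np.diag(w) / C          # re-weighted ridge system
        alpha_new = np.linalg.solve(A, K.T @ y)
        if np.linalg.norm(alpha_new - alpha) < 1e-8:
            return alpha_new
        alpha = alpha_new
    return alpha

def fitness(theta, X, y):
    # Hold-out misclassification rate of an Lp LS-SVM trained with the
    # hyper-parameters theta = (p, log C, log sigma); labels assumed in {-1, +1}.
    p, log_C, log_sigma = theta
    p = float(np.clip(p, 0.5, 2.0))
    C, sigma = np.exp(log_C), np.exp(log_sigma)
    split = int(0.7 * len(y))
    Xtr, ytr, Xva, yva = X[:split], y[:split], X[split:], y[split:]
    alpha = lp_lssvm_irls(rbf_kernel(Xtr, Xtr, sigma), ytr, p, C)
    pred = np.sign(rbf_kernel(Xva, Xtr, sigma) @ alpha)
    return np.mean(pred != yva)

def es_search(X, y, n_gen=30, mu=5, lam=20, seed=0):
    # (mu, lambda) evolution strategy over (p, log C, log sigma): mutate
    # parents with Gaussian noise, keep the mu children with lowest error.
    rng = np.random.default_rng(seed)
    pop = rng.uniform([0.5, -2.0, -2.0], [2.0, 4.0, 2.0], size=(mu, 3))
    step = np.array([0.2, 0.5, 0.5])          # fixed mutation strengths
    for _ in range(n_gen):
        parents = pop[rng.integers(mu, size=lam)]
        children = parents + rng.normal(0.0, 1.0, (lam, 3)) * step
        scores = np.array([fitness(c, X, y) for c in children])
        pop = children[np.argsort(scores)[:mu]]
    return pop[0]                             # best (p, log C, log sigma) found
```

A call such as es_search(X, y) would return the hyper-parameter triple with the lowest hold-out error; this fixed-step ES and the 70/30 split are placeholders for whatever self-adaptive mutation and model-selection criterion the paper actually uses.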