Localized twin SVM via convex minimization

  • Authors:
  • Qiaolin Ye; Chunxia Zhao; Ning Ye; Xiaobo Chen

  • Affiliations:
  • Qiaolin Ye: School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, China, and School of Information Technology, Nanjing Forestry University, Nanjing, China
  • Chunxia Zhao: School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, China
  • Ning Ye: School of Information Technology, Nanjing Forestry University, Nanjing, China
  • Xiaobo Chen: School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

Multisurface proximal support vector machine via generalized eigenvalues (GEPSVM) is an effective classification tool for supervised learning that seeks two nonparallel planes determined by solving two generalized eigenvalue problems (GEPs). The GEPs may lead to unstable classification performance due to matrix singularity. Proximal support vector machine using local information (LIPSVM), a variant of GEPSVM, attempts to avoid this shortcoming by adopting a formulation similar to the Maximum Margin Criterion (MMC). The LIPSVM solution follows directly from solving two standard eigenvalue problems. LIPSVM can be viewed as a reduced algorithm, because it trains the classifier on selectively generated points; a major advantage is its resistance to outliers. In this paper, following the geometric intuition of LIPSVM, a novel multi-plane learning approach called Localized Twin SVM via Convex Minimization (LCTSVM) is proposed. This approach determines two nonparallel planes by solving two newly formed SVM-type problems. In addition to retaining the attractive characteristics of LIPSVM, LCTSVM offers further advantages: (1) it achieves classification accuracy similar to or better than LIPSVM, TWSVM, and LSTSVM; (2) each plane is obtained from a quadratic programming problem (QPP) instead of the special convex-difference optimization arising in LIPSVM; (3) the solution reduces to solving two systems of linear equations, resulting in considerably lower computational cost; and (4) it finds the global minimum. Experiments on both toy and real-world problems demonstrate the effectiveness of LCTSVM.
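
The abstract states that the LCTSVM solution reduces to solving two systems of linear equations, in the spirit of the least-squares twin SVM (LSTSVM) it is compared against. The sketch below only illustrates that general twin-plane idea: two nonparallel planes, each fitted close to one class and pushed away from the other via a squared penalty, obtained from two regularized linear systems. It is not the authors' LCTSVM formulation; the function names (fit_twin_planes, nearest_plane_predict) and parameters (c1, c2, reg) are illustrative assumptions.

```python
import numpy as np

def fit_twin_planes(A, B, c1=1.0, c2=1.0, reg=1e-6):
    """Fit two nonparallel planes w1'x + b1 = 0 and w2'x + b2 = 0 by
    solving two regularized linear systems (least-squares twin-SVM style sketch).

    A: samples of class +1, shape (m1, n); B: samples of class -1, shape (m2, n).
    NOTE: generic illustration only, not the LCTSVM objective from the paper.
    """
    m1, n = A.shape
    m2 = B.shape[0]
    E = np.hstack([A, np.ones((m1, 1))])   # augmented class-A matrix [A  e1]
    F = np.hstack([B, np.ones((m2, 1))])   # augmented class-B matrix [B  e2]
    I = reg * np.eye(n + 1)                # small ridge term for numerical stability

    # Plane 1: stay close to class A, keep class B near signed distance -1
    # (E'E/c1 + F'F) z1 = -F' e2
    z1 = np.linalg.solve(E.T @ E / c1 + F.T @ F + I, -F.T @ np.ones(m2))
    # Plane 2: stay close to class B, keep class A near signed distance +1
    # (F'F/c2 + E'E) z2 = E' e1
    z2 = np.linalg.solve(F.T @ F / c2 + E.T @ E + I, E.T @ np.ones(m1))

    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def nearest_plane_predict(X, planes):
    """Assign each row of X to the class whose plane is nearer (TWSVM-style rule)."""
    (w1, b1), (w2, b2) = planes
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)

if __name__ == "__main__":
    # Toy usage: two Gaussian blobs as the +1 and -1 classes.
    rng = np.random.default_rng(0)
    A = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
    B = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
    planes = fit_twin_planes(A, B)
    preds = nearest_plane_predict(np.vstack([A, B]), planes)
    print("training accuracy:", np.mean(preds == np.r_[np.ones(50), -np.ones(50)]))
```

As in TWSVM and LSTSVM, classification assigns a test point to the class of its nearer plane; the point of the sketch is simply that each plane comes from one (n+1)-dimensional linear solve rather than an eigenvalue problem or a full QPP solver.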