Pruning Support Vector Machines Without Altering Performances

  • Authors:
  • Xun Liang;Rong-Chang Chen;Xinyu Guo

  • Affiliations:
  • Inst. of Comput. Sci. & Technol., Peking Univ., Beijing;-;-

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

Support vector machines (SV machines, SVMs) have many merits that distinguish them from many other machine-learning algorithms, such as the nonexistence of local minima, the maximization of the distance from the separating hyperplane to the SVs, and a solid theoretical foundation. However, SVM training algorithms such as the efficient sequential minimal optimization (SMO) often produce many SVs. Some scholars have found that the kernel outputs are frequently of similar levels, which suggests that some of the SVs are redundant. By analyzing the overlapped information in the kernel outputs, a succinct method based on crosswise propagation (CP) is systematically developed for pruning dispensable SVs while securing the separating hyperplane. The method also circumvents the problem of explicitly discerning SVs in feature space, as the SVM formulation requires. Experiments with the widely used SMO-based software LibSVM reveal that all typical kernels with different parameters on the data sets yield dispensable SVs. Roughly 1%-9% (in some scenarios, more than 50%) of the SVs are found to be dispensable. Furthermore, the experimental results also verify that the pruning method does not alter the SVMs' performances at all. As a corollary, this paper further contributes in theory a new, lower (i.e., tighter) upper bound on the number of SVs in the high-dimensional feature space.
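A minimal sketch of the underlying idea, not the paper's exact crosswise-propagation algorithm: if one SV's column of the Gram matrix is (nearly) a linear combination of the other SVs' columns, then its image in feature space lies (nearly) in their span, so its dual coefficient can be folded into theirs and the decision function f(x) = sum_i beta_i K(sv_i, x) + b is left essentially unchanged. The function names, the RBF kernel, and the tolerance below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prune_redundant_svs(svs, beta, kernel=rbf_kernel, tol=1e-8):
    """svs: (m, d) support vectors; beta: (m,) dual coefficients (alpha_i * y_i).
    Returns a possibly smaller SV set with adjusted coefficients that preserve
    the decision function up to the chosen tolerance."""
    svs, beta = svs.copy(), beta.copy()
    pruned = True
    while pruned and len(beta) > 1:
        pruned = False
        K = kernel(svs, svs)  # Gram matrix of kernel outputs among the SVs
        for j in range(len(beta)):
            others = [i for i in range(len(beta)) if i != j]
            # Express column j of K as a linear combination of the other columns.
            c, *_ = np.linalg.lstsq(K[:, others], K[:, j], rcond=None)
            if np.linalg.norm(K[:, others] @ c - K[:, j]) < tol:
                # Exact dependence in the Gram matrix implies phi(sv_j) is in the
                # span of the others, so K(sv_j, x) = sum_k c_k K(sv_k, x) for all x.
                beta = beta[others] + beta[j] * c  # fold beta_j into the rest
                svs = svs[others]
                pruned = True
                break
    return svs, beta
```

In this sketch the coefficients of the removed SV are redistributed in one shot via least squares; the paper's CP procedure is a more systematic treatment of the same redundancy, and the tolerance would govern how aggressively near-dependent SVs are pruned.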