Feasible Direction Decomposition Algorithms for Training Support Vector Machines

  • Authors: Pavel Laskov
  • Affiliations: Department of Computer and Information Sciences, 102 Smith Hall, University of Delaware, Newark, DE 19718, USA. laskov@cis.udel.edu
  • Venue: Machine Learning
  • Year: 2002


Abstract

The article presents a general view of a class of decomposition algorithms for training Support Vector Machines (SVMs) that are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims (1999; in Schölkopf et al. (Eds.), Advances in kernel methods: Support vector learning (pp. 185–208). MIT Press). Its extension to the regression SVM, the maximal inconsistency algorithm, was recently presented by the author (Laskov, 2000; in Solla, Leen, & Müller (Eds.), Advances in neural information processing systems 12 (pp. 484–490). MIT Press). A detailed account of both algorithms is given, complemented by a theoretical investigation of the relationship between them. It is proved that the two algorithms are equivalent for the pattern recognition SVM, and a feasible direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. The experimental results demonstrate an order-of-magnitude decrease in training time compared with training without decomposition and, most importantly, provide experimental evidence of the linear convergence rate of feasible direction decomposition algorithms.
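The core idea behind feasible-direction decomposition is to pick, at each iteration, a small working set of dual variables that defines the steepest descent direction still feasible under the box and equality constraints of the SVM dual. The sketch below illustrates this selection for the pattern recognition SVM dual (minimize f(α) = ½αᵀQα − eᵀα subject to yᵀα = 0, 0 ≤ α ≤ C) in the Keerthi/LIBSVM-style sign convention; it is an illustrative reconstruction of the selection rule, not the paper's exact pseudocode, and the function name and `q` parameter are assumptions.

```python
import numpy as np

def select_working_set(alpha, grad, y, C, q=2):
    """Feasible-direction working-set selection for the SVM dual (sketch).

    Pick the q coordinates defining the steepest feasible descent
    direction: the largest values of -y_i * grad_i among indices that
    can move "up" and the smallest among indices that can move "down".

    alpha : current dual variables, shape (n,)
    grad  : gradient of the dual objective at alpha, shape (n,)
    y     : labels in {-1, +1}, shape (n,)
    C     : box-constraint upper bound
    q     : working-set size (even); q=2 gives a maximal violating pair
    """
    # Index i can move "up" (increase y_i * alpha_i) or "down" only if
    # the step keeps alpha_i inside the box [0, C].
    up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    down = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    score = -y * grad
    # Stable sorts so tie-breaking is deterministic.
    desc = np.argsort(-score, kind="stable")  # largest scores first
    asc = np.argsort(score, kind="stable")    # smallest scores first
    up_idx = [int(i) for i in desc if up[i]][: q // 2]
    down_idx = [int(i) for i in asc if down[i]][: q // 2]
    return up_idx + down_idx
```

At α = 0 every positive example is an "up" candidate and every negative example a "down" candidate, so the rule returns one index of each sign; the KKT conditions are satisfied (and the decomposition loop stops) exactly when no such violating pair remains.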