An incremental approach to support vector machine learning

  • Authors:
  • Jing Jin

  • Affiliations:
  • Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, China

  • Venue:
  • ISNN'12: Proceedings of the 9th International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2012

Abstract

In this paper we propose a novel approach to incremental support vector machine training. The original SVM problem is a quadratic programming (QP) problem whose solution reduces to a linear combination of training examples. This result suggests that an SVM can be viewed as a two-layer neural network: the structure of the first layer is determined by the chosen kernel function and the training examples, while the coefficients and bias of the second layer remain mutable. In our method we train the weights of the support vectors and the bias using the same stochastic gradient descent algorithm as in perceptron training. In contrast with perceptron training, we pick the hinge loss rather than the squared error as the objective function, since under the hinge loss correctly classified training examples have no effect on the decision surface.
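The scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RBF kernel, the learning rate, and the toy XOR data set are all our assumptions, and the update shown is plain (unregularized) stochastic subgradient descent on the hinge loss, where the first-layer kernel activations stay fixed and only the second-layer coefficients and bias change.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Assumed kernel choice: k(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_sgd_hinge(X, y, epochs=50, lr=0.1, gamma=0.5, seed=0):
    """Train the second-layer weights (one coefficient per training
    example) and the bias by SGD on the hinge loss max(0, 1 - y*f(x))."""
    rng = np.random.default_rng(seed)
    n = len(y)
    K = rbf_kernel(X, X, gamma)   # fixed first-layer activations
    alpha = np.zeros(n)           # mutable second-layer weights
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (K[i] @ alpha + b)
            if margin < 1:
                # Hinge-loss subgradient step: only margin violators
                # move the decision surface; well-classified examples
                # contribute zero gradient and are skipped.
                alpha[i] += lr * y[i]
                b += lr * y[i]
    return alpha, b

def predict(X_train, alpha, b, X_new, gamma=0.5):
    # Decision function f(x) = sum_i alpha_i k(x_i, x) + b.
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy XOR-style problem (not linearly separable in input space).
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([1., 1., -1., -1.])
alpha, b = train_sgd_hinge(X, y)
preds = predict(X, alpha, b, X)
```

Note the contrast the abstract draws: with a squared-error objective every example would contribute a gradient on every pass, whereas here an example whose margin already exceeds 1 leaves `alpha` and `b` untouched.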