A gradual training algorithm of incremental support vector machine learning

  • Authors:
  • Jian-Pei Zhang; Zhong-Wei Li; Jing Yang; Yuan Li

  • Affiliations:
  • College of Computer Science and Technology, Harbin Engineering University, Harbin, China (all authors)

  • Venue:
  • ICNC'05: Proceedings of the First International Conference on Advances in Natural Computation - Volume Part I
  • Year:
  • 2005

Abstract

The Support Vector Machine (SVM) has become a popular tool for learning from large amounts of high-dimensional data, but incremental learning algorithms are often preferred, either because training an SVM on very large datasets is costly in time and memory or because the data become available at different intervals. Because support vectors summarize the data space in a concise way, an incremental SVM framework is well suited to large-scale learning problems. This paper proposes a gradual algorithm for incremental SVM training that partitions the training data and takes into account the mutual impact between new training data and historical data. The training data are divided and combined in a crossed way to collect support vectors; dividing the data into smaller sets reduces the computational complexity, and the gradual process can be trained in parallel. Experimental results on a test dataset show that the classification accuracy of the proposed incremental algorithm is superior to that of the batch SVM model, and that the parallel training method effectively reduces training time.
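The following is a minimal sketch of the general incremental-SVM idea the abstract describes: train chunk by chunk and carry only the support vectors forward as a concise summary of the history data. It uses scikit-learn's SVC and a hypothetical helper `incremental_svm`; the chunking and crossed-combination details of the authors' actual algorithm are not reproduced here.

```python
# Illustrative sketch of incremental SVM training by carrying support vectors
# forward between data chunks (not the authors' exact algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def incremental_svm(chunks, labels, C=1.0, kernel="rbf"):
    """Train an SVM chunk by chunk, keeping only the support vectors
    of each round as the 'history' data for the next round."""
    sv_X = np.empty((0, chunks[0].shape[1]))
    sv_y = np.empty((0,))
    model = None
    for X_new, y_new in zip(chunks, labels):
        # Combine the new chunk with the accumulated support vectors.
        X_train = np.vstack([sv_X, X_new])
        y_train = np.concatenate([sv_y, y_new])
        model = SVC(C=C, kernel=kernel).fit(X_train, y_train)
        # Keep only the support vectors as a concise summary of the data seen so far.
        sv_X = X_train[model.support_]
        sv_y = y_train[model.support_]
    return model

# Toy usage: split a synthetic dataset into 4 incremental batches.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = incremental_svm(np.array_split(X, 4), np.array_split(y, 4))
print("final number of support vectors:", len(clf.support_))
```

Because each round trains only on one chunk plus the retained support vectors, the per-round problem is much smaller than batch training on the full dataset, and independent chunks can in principle be processed in parallel before their support vectors are combined.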