A fast parallel optimization for training support vector machine

  • Authors:
  • Jian-Xiong Dong, Centre for Pattern Recognition and Machine Intelligence, Concordia University, Montreal, Quebec, Canada
  • Adam Krzyżak, Department of Computer Science, Concordia University, Montreal, Quebec, Canada
  • Ching Y. Suen, Centre for Pattern Recognition and Machine Intelligence, Concordia University, Montreal, Quebec, Canada

  • Venue:
  • MLDM'03: Proceedings of the 3rd International Conference on Machine Learning and Data Mining in Pattern Recognition
  • Year:
  • 2003

Abstract

A fast SVM training algorithm for multi-class problems, consisting of a parallel optimization step followed by a sequential optimization step, is presented. The main advantage of the parallel step is that it quickly removes most non-support vectors, which dramatically reduces the training time required by the sequential stage. In addition, strategies such as kernel caching, shrinking, and calling BLAS functions are integrated into the algorithm to further speed up training. Experiments on the MNIST handwritten digit database show that, without sacrificing generalization performance, the proposed algorithm achieves a speed-up factor of 110 over Keerthi et al.'s modified SMO. Moreover, for the first time we investigate the training performance of SVMs on the handwritten Chinese character database ETL9B, which has more than 3,000 categories and about 500,000 training samples. The total training time is just 5.1 hours, and a raw error rate of 1.1% on ETL9B is achieved.
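The two-stage idea summarized above (a parallel filtering pass that discards likely non-support vectors, then sequential optimization on the survivors) can be sketched roughly as follows. This is an illustrative approximation, not the paper's algorithm: it substitutes a simple batch subgradient linear SVM for the SMO-style sequential optimizer, synthetic Gaussian data for MNIST/ETL9B, and a margin threshold for the paper's criterion for identifying candidate support vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (a hypothetical stand-in for the paper's datasets).
n = 400
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

def train_linear_svm(X, y, lam=1e-2, lr=0.1, epochs=300):
    """Batch subgradient descent on the regularized hinge loss
    (a toy substitute for an SMO-style sequential optimizer)."""
    w, b = np.zeros(X.shape[1]), 0.0
    m = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                 # points violating the margin
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / m
        grad_b = -y[viol].sum() / m
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Stage 1: "parallel" filtering -- each chunk could be solved on its own
# processor; here a plain loop stands in for the parallel step.
keep = []
for idx in np.array_split(rng.permutation(len(y)), 4):
    w, b = train_linear_svm(X[idx], y[idx])
    margins = y[idx] * (X[idx] @ w + b)
    keep.append(idx[margins < 1.1])          # near-margin points: candidate SVs
keep = np.concatenate(keep)

# Stage 2: sequential optimization on the much smaller candidate set.
w, b = train_linear_svm(X[keep], y[keep])
accuracy = np.mean(np.sign(X @ w + b) == y)
print(len(keep), len(y), round(accuracy, 3))
```

The point of the sketch is the data reduction: only points near the subproblem margins survive stage 1, so stage 2 runs on a fraction of the original training set, which is the source of the speed-up the abstract reports.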