An incremental learning algorithm for Lagrangian support vector machines

  • Authors:
  • Hua Duan; Xiaojian Shao; Weizhen Hou; Guoping He; Qingtian Zeng

  • Affiliations:
  • College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266510, PR China (Hua Duan; Weizhen Hou; Guoping He; Qingtian Zeng)
  • College of Science, China Agricultural University, Beijing 100083, PR China (Xiaojian Shao)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2009

Abstract

Incremental learning has recently attracted increasing attention in both theory and applications. In this paper, incremental learning algorithms for the Lagrangian support vector machine (LSVM) are proposed. LSVM is an improvement on the standard linear SVM for classification that reduces training to the minimization of an unconstrained, differentiable convex program. This program is solved by a simple iterative scheme with linear convergence. At the start of the algorithm, the required matrix inversion is reduced, via the Sherman-Morrison-Woodbury identity, to the inversion of a matrix whose order is the dimensionality of the original input space plus one, which lowers the computation time. The incremental learning algorithms for LSVM presented in this paper cover two cases: online and batch incremental learning. Because the matrix inverse after each increment is obtained from previously computed information, the computation does not have to be repeated from scratch. Experimental results show that the proposed algorithms outperform comparable methods.
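
As a concrete illustration of the scheme summarized in the abstract, the sketch below implements the standard LSVM fixed-point iteration with the Sherman-Morrison-Woodbury (SMW) identity, plus a batch-increment helper that updates the small inverse when new samples arrive. This is a minimal sketch assuming the usual LSVM formulation; the function names, parameter defaults, and the `smw_increment` helper are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lsvm_fit(A, d, nu=1.0, alpha=None, tol=1e-5, max_iter=1000):
    """Minimal LSVM sketch: solve min_{u>=0} 0.5 u'Qu - e'u with
    Q = I/nu + H H', H = D[A, -e], via the fixed-point iteration
    u <- Q^{-1}(e + ((Qu - e) - alpha*u)_+), where 0 < alpha < 2/nu."""
    m, n = A.shape
    e = np.ones(m)
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])  # H = D[A, -e], shape (m, n+1)

    # SMW: Q^{-1} v = nu * (v - H S^{-1} H' v) with S = I/nu + H'H,
    # so only an (n+1) x (n+1) matrix is ever inverted.
    S = np.eye(n + 1) / nu + H.T @ H
    S_inv = np.linalg.inv(S)

    def Q_inv(v):
        return nu * (v - H @ (S_inv @ (H.T @ v)))

    if alpha is None:
        alpha = 1.9 / nu                                # satisfies 0 < alpha < 2/nu
    u = Q_inv(e)
    for _ in range(max_iter):
        Qu = u / nu + H @ (H.T @ u)                     # Q u without forming the m x m matrix Q
        u_new = Q_inv(e + np.maximum(Qu - e - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new

    w_gamma = H.T @ u                                   # [w; gamma]; classifier is sign(x'w - gamma)
    return u, w_gamma[:n], w_gamma[n], S_inv

def smw_increment(S_inv, A_new, d_new):
    """Hypothetical batch-increment step: update S^{-1} from the previous
    S^{-1} with the SMW identity instead of refactorizing from scratch."""
    m_new = A_new.shape[0]
    H_new = d_new[:, None] * np.hstack([A_new, -np.ones((m_new, 1))])
    # (S + H_new' H_new)^{-1}
    #   = S^{-1} - S^{-1} H_new' (I + H_new S^{-1} H_new')^{-1} H_new S^{-1}
    K = np.linalg.inv(np.eye(m_new) + H_new @ S_inv @ H_new.T)
    return S_inv - S_inv @ H_new.T @ K @ H_new @ S_inv
```

In this sketch, after `smw_increment` updates the small inverse, the LSVM iteration would be re-run on the enlarged data with the updated `S_inv`; the online case corresponds to adding one sample at a time (`m_new = 1`), the batch case to adding a chunk of samples at once.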