An algorithm for training a large scale support vector machine for regression based on linear programming and decomposition methods

  • Authors:
  • Pablo Rivas-Perea
  • Juan Cota-Ruiz

  • Affiliations:
  • Department of Computer Science, Baylor University, One Bear Place 97356, Waco, TX 76798-7356, United States
  • Department of Electrical and Computer Engineering, Autonomous University of Ciudad Juarez (UACJ), Ave. del Charro #450 Nte., C.P. 32310, Ciudad Juarez, Chihuahua, Mexico

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2013

Abstract

This paper presents a method to train a Support Vector Regression (SVR) model for the large-scale case, where the number of training samples exceeds the available computational resources. The proposed scheme consists of posing the SVR problem entirely as a Linear Programming (LP) problem and of developing a sequential optimization method based on variable decomposition, constraint decomposition, and primal-dual interior point methods. Experimental results demonstrate that the proposed approach performs comparably to other SV-based models. In particular, the experiments show that as the problem size increases, the solution becomes sparser and greater computational efficiency is gained relative to other methods. This demonstrates that the proposed learning scheme and the LP-SVR model are robust and efficient compared with other methodologies for large-scale problems.
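To make the abstract's central idea concrete, the sketch below poses a small SVR problem as a linear program and solves it with an off-the-shelf interior-point-capable LP solver. This is a minimal illustration of the standard L1-norm LP formulation of SVR, not the authors' decomposition scheme; the kernel choice, the hyperparameters `C`, `eps`, and `gamma`, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative synthetic regression data (not from the paper).
rng = np.random.default_rng(0)
n = 40
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(X) + 0.1 * rng.standard_normal(n)

# RBF kernel matrix; gamma is an assumed hyperparameter.
gamma = 1.0
K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)

C, eps = 10.0, 0.1  # assumed regularization / epsilon-tube width

# LP variables: [a_plus (n), a_minus (n), b_plus, b_minus, xi (n)],
# all nonnegative, with alpha = a_plus - a_minus and b = b_plus - b_minus.
# Objective: minimize ||alpha||_1 + C * sum(xi).
c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(n)])

ones = np.ones((n, 1))
I = np.eye(n)
# Epsilon-insensitive constraints |f(x_i) - y_i| <= eps + xi_i,
# where f(x_i) = (K @ alpha)_i + b, split into two one-sided inequalities.
A_ub = np.vstack([
    np.hstack([K, -K, ones, -ones, -I]),   # f(x_i) - y_i <= eps + xi_i
    np.hstack([-K, K, -ones, ones, -I]),   # y_i - f(x_i) <= eps + xi_i
])
b_ub = np.concatenate([eps + y, eps - y])

# 'highs' dispatches to modern simplex/interior-point implementations.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

alpha = res.x[:n] - res.x[n:2 * n]
b = res.x[2 * n] - res.x[2 * n + 1]
pred = K @ alpha + b
print("solved:", res.success, "support vectors:", int(np.sum(np.abs(alpha) > 1e-6)))
```

The L1 objective is what drives the sparsity the abstract mentions: many components of `alpha` land exactly at zero, so only a few kernel columns are needed at prediction time. The paper's contribution is making this tractable at scale by decomposing the variables and constraints into subproblems, each solved with a primal-dual interior point method.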