Zero Norm Least Squares Proximal SVR

  • Authors:
  • Jayadeva; Sameena Shah; Suresh Chandra

  • Affiliations:
  • Dept. of Electrical Engineering, Indian Institute of Technology, New Delhi, India 110016; Dept. of Electrical Engineering, Indian Institute of Technology, New Delhi, India 110016; Dept. of Mathematics, Indian Institute of Technology, New Delhi, India 110016

  • Venue:
  • PReMI '09 Proceedings of the 3rd International Conference on Pattern Recognition and Machine Intelligence
  • Year:
  • 2009


Abstract

Least Squares Proximal Support Vector Regression (LSPSVR) requires only a single matrix inversion to obtain the Lagrange multipliers, as opposed to solving a Quadratic Programming Problem (QPP) as in the conventional SVM formulation. However, like other least-squares-based methods, LSPSVR suffers from a lack of sparsity: most of the Lagrange multipliers are non-zero, so evaluating the resulting regressor requires a large number of data points. A large zero norm of the Lagrange multiplier vector inevitably leads to a large kernel matrix, which is unsuitable for fast regression on large datasets. This paper shows how the LSPSVR formulation may be recast into one that also minimizes the zero norm of the vector of Lagrange multipliers, thereby imposing sparsity. Experimental results on benchmark data show that a significant reduction in the number of support vectors can be achieved without a concomitant increase in error.
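
The abstract gives no formulas, so the sketch below only illustrates the general idea: a least-squares-style SVR trained by a single linear solve, followed by a simple magnitude threshold on the multipliers as a crude stand-in for zero-norm minimization. The kernel, the bias handling, and the parameters C, gamma, and tau are assumptions for illustration, not the paper's actual LSPSVR formulation or sparsification procedure.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)

    def ls_svr_fit(X, y, C=10.0, gamma=1.0):
        """Least-squares-style SVR: one linear solve instead of a QPP.
        Solves (K + 1*1^T + I/C) alpha = y, absorbing the bias into the
        kernel in the proximal style; an assumed form, not the paper's
        exact LSPSVR system."""
        K = rbf_kernel(X, X, gamma) + 1.0          # the +1 term absorbs the bias
        alpha = np.linalg.solve(K + np.eye(len(y)) / C, y)
        return alpha

    def sparsify(alpha, tau=1e-3):
        """Crude stand-in for zero-norm minimization: keep only multipliers
        whose magnitude exceeds a relative threshold tau (hypothetical heuristic)."""
        return np.abs(alpha) > tau * np.max(np.abs(alpha))

    def ls_svr_predict(X_train, alpha, keep, X_test, gamma=1.0):
        """Predict using only the retained (sparse) support vectors."""
        K = rbf_kernel(X_test, X_train[keep], gamma) + 1.0
        return K @ alpha[keep]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(200)
        alpha = ls_svr_fit(X, y)
        keep = sparsify(alpha)
        y_hat = ls_svr_predict(X, alpha, keep, X)
        print("support vectors kept:", keep.sum(), "/", len(y))
        print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))

In a dense least-squares solution, nearly every alpha is non-zero, so the kept count approaches the training set size; the thresholding step only mimics the effect the paper attributes to its recast, zero-norm-penalized formulation.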