Pruning error minimization in least squares support vector machines

  • Authors:
  • B. J. de Kruif; T. J. A. de Vries

  • Affiliations:
  • Drebbel Inst. for Mechatronics, Univ. of Twente, Enschede, Netherlands

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2003


Abstract

The support vector machine (SVM) is a method for classification and function approximation. This method commonly uses an ε-insensitive cost function, meaning that errors smaller than ε incur no penalty. As an alternative, the least squares support vector machine (LSSVM) uses a quadratic cost function. When the LSSVM method is used for function approximation, a nonsparse solution is obtained. Sparseness is imposed by pruning, i.e., recursively solving the approximation problem and subsequently omitting data that had a small error in the previous pass. However, a small approximation error in the previous pass does not reliably predict what the error will be after the sample has been omitted. In this paper, a procedure is introduced that selects from the data set the training sample whose omission will introduce the smallest approximation error. It is shown that this pruning scheme outperforms the standard one.
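The standard pruning scheme the abstract contrasts against can be sketched as follows. This is a minimal illustration, not the paper's code: in an LSSVM the training error at sample i is proportional to its Lagrange multiplier α_i, so the conventional heuristic drops the sample with the smallest |α_i| and refits. The kernel, hyperparameters, and function names below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian RBF kernel matrix between row-sample sets X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # Solve the LSSVM linear system for function approximation:
    # [0    1^T          ] [b    ]   [0]
    # [1    K + I/gamma  ] [alpha] = [y]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(X_train, b, alpha, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b

def prune_smallest_alpha(X, y, keep, gamma=100.0, sigma=1.0):
    # Standard pruning: repeatedly refit and discard the sample with
    # the smallest |alpha_i| (i.e., smallest error in the previous pass).
    # The paper's point: this need not be the sample whose omission
    # introduces the smallest error after refitting.
    while len(y) > keep:
        _, alpha = lssvm_fit(X, y, gamma, sigma)
        i = int(np.argmin(np.abs(alpha)))
        X = np.delete(X, i, axis=0)
        y = np.delete(y, i)
    return X, y
```

Each pruning pass re-solves an (n+1)-dimensional linear system, so the loop costs O(n^4) overall for a naive implementation; the paper's contribution is a better criterion for which sample to drop, not a cheaper solve.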