Letters: Fast error estimation for efficient support vector machine growing

  • Authors:
  • A. Navia-Vázquez; R. Díaz-Morales

  • Affiliations:
  • DTSC, Univ. Carlos III de Madrid, Avda Universidad 30, 28911-Leganés, Madrid, Spain (both authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2010

Abstract

Support vector machines (SVMs) have become an off-the-shelf solution for many machine learning tasks but, unfortunately, the size of the resulting machines is often exceedingly large, which hampers their use in practical applications demanding extremely fast responses. Some methods exist to prune the models after training, but a full SVM model must be trained first, which usually represents a large computational cost. Furthermore, the reduction algorithms are prone to falling into local minima and also add a non-negligible computational cost of their own. Alternative procedures based on incrementally growing a semiparametric model provide a good compromise between complexity, machine size and performance. We investigate here the potential benefits of a fast error estimation (FEE) mechanism to improve the semiparametric SVM growing process. Specifically, we propose to use the FEE method to identify the best node to add to the model at every growing step, by selecting the candidate with the lowest cross-validation error. We evaluate the proposed approach on benchmarks with real-world datasets from the UCI machine learning repository.
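
The core idea described in the abstract, growing the model one node at a time and keeping, at each step, the candidate whose addition yields the lowest estimated cross-validation error, can be illustrated with a short sketch. The Python snippet below is only a generic, hypothetical illustration of such a greedy growing loop using an RBF basis and a regularised least-squares fit; it is not the paper's FEE procedure, and all names and parameters (rbf, cv_error, grow_model, gamma, lam, n_candidates) are assumptions introduced for illustration.

    import numpy as np

    def rbf(X, C, gamma=1.0):
        # RBF (Gaussian) kernel matrix between the rows of X and the centres C.
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def cv_error(X, y, centres, gamma=1.0, lam=1e-3, folds=5, seed=0):
        # Rough K-fold cross-validation error of a regularised least-squares
        # classifier built on the given centres (labels assumed in {-1, +1}).
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        errors = []
        for fold in np.array_split(idx, folds):
            train = np.ones(len(y), dtype=bool)
            train[fold] = False
            Phi_tr = rbf(X[train], centres, gamma)
            Phi_te = rbf(X[~train], centres, gamma)
            w = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(len(centres)),
                                Phi_tr.T @ y[train])
            errors.append(np.mean(np.sign(Phi_te @ w) != y[~train]))
        return float(np.mean(errors))

    def grow_model(X, y, max_nodes=10, n_candidates=20, gamma=1.0, seed=0):
        # Greedy growing loop: at each step, score a pool of candidate centres
        # by estimated cross-validation error and keep the best one.
        rng = np.random.default_rng(seed)
        centres = np.empty((0, X.shape[1]))
        for _ in range(max_nodes):
            pool = rng.choice(len(X), size=min(n_candidates, len(X)), replace=False)
            scores = [cv_error(X, y, np.vstack([centres, X[i:i + 1]]), gamma)
                      for i in pool]
            best = int(pool[int(np.argmin(scores))])
            centres = np.vstack([centres, X[best:best + 1]])
        return centres

With labels in {-1, +1}, calling grow_model(X, y, max_nodes=10) returns the greedily selected centres. The brute-force cross-validation used above to score each candidate is expensive; the paper's FEE mechanism presumably provides a much cheaper estimate of that error, which is what makes the growing process efficient.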