Comparing Support Vector Machines and Feedforward Neural Networks With Similar Hidden-Layer Weights

  • Authors:
  • E. Romero; D. Toppo

  • Affiliations:
  • Univ. Politècnica de Catalunya, Barcelona

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2007

Abstract

Support vector machines (SVMs) usually need a large number of support vectors to form their output. Recently, several models have been proposed to build SVMs with a small number of basis functions, maintaining the property that their hidden-layer weights are a subset of the data (the support vectors). This property is also present in some algorithms for feedforward neural networks (FNNs) that construct the network sequentially, leading to sparse models where the number of hidden units can be explicitly controlled. An experimental study on several benchmark data sets, comparing SVMs with the aforementioned sequential FNNs, was carried out. The experiments were performed under the same conditions for all the models, and they can be seen as a comparison of SVMs and FNNs when both models are restricted to use similar hidden-layer weights. Accuracies were found to be very similar. Regarding the number of support vectors, sequential FNNs constructed models with fewer hidden units than standard SVMs, and in the same range as "sparse" SVMs. Computational times were lower for SVMs.
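
To make the structural correspondence concrete, here is a minimal sketch (not the authors' code) of the idea the abstract relies on: an RBF-kernel SVM's decision function can be read as a one-hidden-layer network whose hidden-unit weights are the support vectors themselves. The dataset, scikit-learn usage, and hyperparameters below are illustrative assumptions.

```python
# Sketch: an SVM decision function viewed as a one-hidden-layer network
# whose hidden-layer "weights" are the support vectors. Dataset and
# hyperparameters are arbitrary choices for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gamma = 0.1
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

def rbf(x, center, gamma=gamma):
    # One "hidden unit": an RBF activation centered at a support vector.
    return np.exp(-gamma * np.sum((x - center) ** 2))

def network_style_output(x):
    # f(x) = sum_i alpha_i * K(x, sv_i) + b, i.e. a hidden layer of
    # support-vector-centered units followed by a linear output layer.
    hidden = np.array([rbf(x, sv) for sv in svm.support_vectors_])
    return hidden @ svm.dual_coef_.ravel() + svm.intercept_[0]

x0 = X[0]
print(svm.decision_function([x0])[0])   # SVM output
print(network_style_output(x0))         # same value, network form
print("hidden units (support vectors):", len(svm.support_vectors_))
```

In this view, the number of hidden units equals the number of support vectors, which is precisely the quantity that the sequential FNN algorithms compared in the paper control explicitly.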