Accelerating kernel perceptron learning

  • Authors:
  • Daniel García, Ana González, José R. Dorronsoro

  • Affiliations:
  • Dpto. de Ingeniería Informática and Instituto de Ingeniería del Conocimiento, Universidad Autónoma de Madrid, Madrid, Spain (all authors)

  • Venue:
  • ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
  • Year:
  • 2007

Abstract

Recently it has been shown that appropriate perceptron training methods, such as the Schlesinger-Kozinec (SK) algorithm, can provide maximal margin hyperplanes with training cost O(N × T), where N denotes the sample size and T the number of training iterations. In this work we relate SK training to the classical Rosenblatt rule and show that, when the hyperplane vector is written in dual form, each support vector (SV) coefficient determines how often that pattern appears during training; in particular, large-coefficient SVs dominate training costs. In this light we explore a training acceleration procedure in which large-coefficient and, hence, high-cost SVs are removed from training, which also allows a further, stable shrinking of large samples. As we shall see, this results in much faster training without penalizing test classification accuracy.
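
As a rough illustrative sketch of the dual-form view described in the abstract (not the authors' SK algorithm), the Python snippet below trains a Rosenblatt-style kernel perceptron whose dual coefficient alpha[i] counts how often pattern i triggers an update, so a large alpha[i] marks a frequently selected, costly support vector. The coef_cap parameter, the RBF kernel choice, and all function names are assumptions added here for illustration; coef_cap simply freezes patterns whose coefficient grows too large, mimicking the removal of expensive SVs from further training.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of X and Y.
        d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * d2)

    def dual_kernel_perceptron(X, y, T=100, gamma=1.0, coef_cap=None):
        """Dual-form Rosenblatt-style kernel perceptron (illustrative only).

        alpha[i] counts how often pattern i triggers an update; if coef_cap
        is given, patterns whose coefficient exceeds it are frozen, i.e.
        removed from further training as in the acceleration idea above.
        """
        N = X.shape[0]
        K = rbf_kernel(X, X, gamma)          # precomputed kernel matrix
        alpha = np.zeros(N)
        b = 0.0
        active = np.ones(N, dtype=bool)      # patterns still eligible for updates
        for _ in range(T):
            updated = False
            for i in np.where(active)[0]:
                f_i = np.dot(alpha * y, K[:, i]) + b
                if y[i] * f_i <= 0:          # misclassified: dual Rosenblatt update
                    alpha[i] += 1.0
                    b += y[i]
                    updated = True
                    if coef_cap is not None and alpha[i] > coef_cap:
                        active[i] = False    # freeze a large-coefficient (costly) SV
            if not updated:                  # separated: stop early
                break
        return alpha, b

A new point x would be classified by the sign of sum_j alpha[j] * y[j] * k(x_j, x) + b; only patterns with nonzero alpha (the SVs) contribute, which is why capping or removing the largest-coefficient ones reduces per-iteration cost.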