A linear learning method for multilayer perceptrons using least-squares

  • Authors:
  • Bertha Guijarro-Berdiñas; Oscar Fontenla-Romero; Beatriz Pérez-Sánchez; Paula Fraguela

  • Affiliations:
  • Department of Computer Science, Facultad de Informática, Universidad de A Coruña, A Coruña, Spain (all authors)

  • Venue:
  • IDEAL'07: Proceedings of the 8th International Conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2007

Abstract

Training multilayer neural networks is typically carried out using gradient descent techniques. Ever since backpropagation (BP), the first gradient-based algorithm, was proposed by Rumelhart et al., novel training algorithms have appeared to improve several facets of the learning process for feed-forward neural networks; learning speed is one of them. In this paper, a learning algorithm based on linear least squares is presented. We offer the theoretical basis for the method, and its performance is illustrated by applying it to several well-known data sets and comparing it with other learning algorithms. Results show that the new algorithm improves the learning speed of several backpropagation algorithms while preserving good optimization accuracy. Due to its performance and low computational cost, it is an interesting alternative even to second-order methods, particularly when dealing with large networks and training sets.
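The abstract does not spell out the algorithm, but the general idea behind linear-least-squares training of a layer can be sketched. The snippet below is a minimal illustration under my own assumptions, not the authors' method: for an output layer of tanh units, the desired outputs are mapped through the inverse activation, which turns the nonlinear fitting problem into a single linear system solvable in closed form. The function name `least_squares_layer` and the toy data are hypothetical.

```python
import numpy as np

def least_squares_layer(X, d, activation_inverse=np.arctanh, eps=1e-6):
    """Hypothetical sketch: fit one layer's weights by linear least squares.

    Instead of iterating with gradient descent, the desired outputs d are
    mapped through the inverse of the activation function (tanh assumed
    here), so the weights come out of one least-squares solve.
    """
    # Map targets back through the activation; clip so arctanh stays finite.
    z = activation_inverse(np.clip(d, -1 + eps, 1 - eps))
    # Append a bias column to the inputs.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    # Solve min_W ||Xb @ W - z||^2 in closed form.
    W, *_ = np.linalg.lstsq(Xb, z, rcond=None)
    return W  # shape: (n_inputs + 1, n_outputs)

# Toy usage: recover a linear map observed through a tanh output layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_W = rng.normal(size=(4, 2))
d = np.tanh(np.hstack([X, np.ones((200, 1))]) @ true_W)
W = least_squares_layer(X, d)
pred = np.tanh(np.hstack([X, np.ones((200, 1))]) @ W)
print(np.max(np.abs(pred - d)))  # near zero on this noiseless toy problem
```

A solve like this costs one matrix factorization rather than many gradient epochs, which is consistent with the speed advantage the abstract claims, especially for large training sets.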