Perceptron training algorithms designed using discrete-time control Liapunov functions

  • Authors:
  • Oumar Diene; Amit Bhaya

  • Affiliations:
  • Center of Engineering, Modeling and Applied Social Sciences, CECS/Federal University of ABC, Rua Catequese 242, 09090-400 Santo Andre, SP, Brazil; Department of Electrical Engineering, NACAD-COPPE/Federal University of Rio de Janeiro, P.O. Box 68504, 21945-970 Rio de Janeiro, RJ, Brazil

  • Venue:
  • Neurocomputing
  • Year:
  • 2009

Abstract

Perceptrons, proposed in the seminal 1943 paper of McCulloch and Pitts, have remained of interest to the neural network community because of their simplicity and their usefulness in classifying linearly separable data; they can be viewed as implementing iterative procedures for ''solving'' linear inequalities. Gradient descent and conjugate gradient methods, normally used for linear equalities, can be adapted to solve linear inequalities through simple modifications that have been proposed in the literature but not completely analyzed. This paper applies a recently proposed control-inspired approach to the design of iterative steepest descent and conjugate gradient algorithms for perceptron training in batch mode: certain parameters of the training algorithm are regarded as controls, and a control Liapunov technique is then used to choose appropriate values for these parameters.
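To make the ''solving linear inequalities'' viewpoint concrete, the following is a minimal sketch of batch-mode perceptron training by steepest descent on the perceptron criterion. It uses a fixed step size as an illustrative assumption; it does not implement the paper's control Liapunov parameter selection, where the step size itself would be chosen at each iteration as a control.

```python
import numpy as np

def batch_perceptron(X, y, lr=0.1, max_iter=1000):
    """Batch steepest descent on the perceptron criterion
    J(w) = sum over misclassified i of -y_i * (w . x_i).

    Classification is treated as 'solving' the system of linear
    inequalities y_i * (w . x_i) > 0. The constant step size `lr`
    is a placeholder assumption, not the control-Liapunov choice.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb bias into w
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iter):
        margins = y * (Xb @ w)
        mis = margins <= 0          # currently violated inequalities
        if not mis.any():
            break                   # all inequalities satisfied
        # steepest-descent step: -grad J(w) = sum of y_i x_i over mis
        w += lr * (y[mis, None] * Xb[mis]).sum(axis=0)
    return w

# linearly separable toy data (hypothetical example)
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = batch_perceptron(X, y)
```

For linearly separable data such as the toy set above, the iteration terminates with all margins `y_i * (w . x_i)` strictly positive.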