Multilayer perceptron and neural networks

  • Authors:
  • Marius-Constantin Popescu, Valentina E. Balas, Liliana Perescu-Popescu, Nikos Mastorakis

  • Affiliations:
  • Faculty of Electromechanical and Environmental Engineering, University of Craiova, Romania; Faculty of Engineering, "Aurel Vlaicu" University of Arad, Romania; "Elena Cuza" College of Craiova, Romania; Technical University of Sofia, Bulgaria

  • Venue:
  • WSEAS Transactions on Circuits and Systems
  • Year:
  • 2009

Abstract

Attempts to solve linearly inseparable problems have led to different variations in the number of neuron layers and in the activation functions used. The backpropagation algorithm is the best-known and most widely used supervised learning algorithm. It is also called the generalized delta algorithm because it extends the training method of the Adaline network; it is based on minimizing the difference between the desired output and the actual output by gradient descent (the gradient tells us how a function varies in different directions). Training a multilayer perceptron is often quite slow, requiring thousands or tens of thousands of epochs for complex problems. The best-known methods for accelerating learning are the momentum method and the use of a variable learning rate. The paper also presents the possibility of controlling an induction drive using neural systems.
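The ingredients named in the abstract can be sketched in a few lines of NumPy: backpropagation via the generalized delta rule, accelerated by momentum and by a variable learning rate (here the common "bold driver" heuristic: grow the rate while the error falls, shrink it when the error rises). The network size, the rate values, and the XOR task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic linearly inseparable problem that a single-layer
# perceptron cannot solve, which motivates the hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units and one sigmoid output (assumed sizes).
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

eta, mu = 0.1, 0.9                        # learning rate, momentum coefficient
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
prev_loss = np.inf

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)

    # Variable learning rate ("bold driver" heuristic, an assumption here):
    # increase eta while the error decreases; otherwise shrink it and
    # discard the accumulated momentum.
    if loss < prev_loss:
        eta *= 1.05
    else:
        eta *= 0.7
        vW1[:], vb1[:], vW2[:], vb2[:] = 0, 0, 0, 0
    prev_loss = loss

    # Backward pass: deltas follow from the chain rule, using
    # sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # Momentum update: new step = mu * previous step - eta * gradient.
    vW2 = mu * vW2 - eta * (h.T @ d_out); W2 += vW2
    vb2 = mu * vb2 - eta * d_out.sum(0);  b2 += vb2
    vW1 = mu * vW1 - eta * (X.T @ d_h);   W1 += vW1
    vb1 = mu * vb1 - eta * d_h.sum(0);    b1 += vb1

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel()))  # should approximate XOR: 0, 1, 1, 0
```

Without the two accelerations, the same network typically needs many more epochs to reach a comparable error, which is the slowness the abstract describes.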