Attempts to solve linearly inseparable problems have led to variations in the number of neuron layers and in the activation functions used. The backpropagation algorithm is the best-known and most widely used supervised learning algorithm. It is also called the generalized delta rule because it generalizes the training procedure of the Adaline network; it minimizes the difference between the desired output and the actual output by gradient descent (the gradient indicates how a function varies in different directions). Training a multilayer perceptron is often quite slow, requiring thousands or tens of thousands of epochs for complex problems. The best-known methods for accelerating learning are the momentum method and a variable learning rate. The paper presents the possibility of controlling an induction motor drive using neural systems.
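As a minimal sketch (not the paper's implementation), the ideas above can be combined in a few lines of NumPy: a small multilayer perceptron trained by backpropagation on the XOR problem, the classic linearly inseparable task, with a momentum term added to each weight update. The network size, learning rate, and momentum coefficient are illustrative assumptions.

```python
import numpy as np

# Assumed setup: a 2-4-1 sigmoid network trained on XOR by backpropagation.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])            # XOR: linearly inseparable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer

eta, mu = 0.5, 0.9                                # learning rate, momentum
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)  # momentum "velocity" terms
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss0 = np.mean((y - out) ** 2)                   # initial mean squared error

for _ in range(5000):
    h, out = forward(X)
    # Output-layer delta: MSE derivative times sigmoid derivative
    d2 = (out - y) * out * (1 - out)
    # Hidden-layer delta, backpropagated through W2
    d1 = (d2 @ W2.T) * h * (1 - h)
    # Momentum update: v <- mu*v - eta*grad ; w <- w + v
    vW2 = mu * vW2 - eta * (h.T @ d2); W2 += vW2
    vb2 = mu * vb2 - eta * d2.sum(0);  b2 += vb2
    vW1 = mu * vW1 - eta * (X.T @ d1); W1 += vW1
    vb1 = mu * vb1 - eta * d1.sum(0);  b1 += vb1

_, out = forward(X)
loss1 = np.mean((y - out) ** 2)                   # error after training
print(loss0, loss1)
```

Setting `mu = 0` recovers plain gradient descent; the momentum term reuses part of the previous update, which damps oscillations and typically cuts the number of epochs needed. A variable learning rate would replace the constant `eta` with a schedule, e.g. one that grows while the error keeps decreasing and shrinks when it rises.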