In this article we study artificial neural network training under two conditions: (a) the training algorithm must not rely on direct computation of gradients, and (b) the algorithm must be efficient for online training. We review the relevant algorithms currently available in the literature and propose a new algorithm that further improves on the second condition. We test these algorithms on benchmark problems commonly used in the literature and compare their efficiency against the popular backpropagation algorithm. We also introduce a realistic problem involving a robotic elbow manipulator and test the algorithms on it as well.
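To make the gradient-free setting concrete, the following is a minimal sketch of one perturbation-based online update, in the spirit of the stochastic error-descent and weight-perturbation methods the article surveys. It is not the article's proposed algorithm; the toy model, function names, and hyperparameters (sigma, eta) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    """Illustrative single-neuron model: tanh of a weighted sum."""
    return np.tanh(w @ x)

def loss(w, x, t):
    """Squared error on one online sample."""
    return 0.5 * (forward(w, x) - t) ** 2

def perturbation_step(w, x, t, sigma=0.01, eta=0.1):
    """One online update without computing gradients: probe the error
    with a small random perturbation and descend along the probe in
    proportion to the measured error change (stochastic error descent).
    Only two forward evaluations of the loss are required."""
    dw = sigma * rng.standard_normal(w.shape)   # random probe direction
    dE = loss(w + dw, x, t) - loss(w, x, t)     # measured error change
    # E[dE * dw] is proportional to the true gradient, so this update
    # follows the gradient on average while never computing it directly.
    return w - eta * (dE / sigma**2) * dw

# Toy online usage: learn t = tanh(w_true . x) from streaming samples.
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for step in range(5000):
    x = rng.standard_normal(3)
    t = np.tanh(w_true @ x)
    w = perturbation_step(w, x, t)
print(w)  # should approach w_true
```

Because each update needs only loss evaluations, schemes of this kind suit hardware or settings where backpropagating gradients is impractical, which is the motivation behind the algorithms compared in the article.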