When does online BP training converge?
IEEE Transactions on Neural Networks
This paper considers a class of online gradient learning methods for backpropagation (BP) neural networks with a single hidden layer. We assume that in each training cycle, every sample in the training set is supplied to the network exactly once, in a stochastic order. Remarkably, these stochastic learning methods can be shown to be deterministically convergent. We present weak and strong convergence results for the learning methods, showing respectively that the gradient of the error function tends to zero and that the weight sequence tends to a fixed point. The conditions on the activation function and the learning rate that guarantee convergence are relaxed compared with existing results. Our convergence results hold not only for S-S type neural networks (where both the output and hidden neurons use sigmoid activation functions) but also for P-P, P-S, and S-P type networks, where S and P denote sigmoid and polynomial functions, respectively.
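The training scheme described in the abstract amounts to per-cycle random reshuffling: each cycle visits every sample exactly once, in a freshly shuffled order, with a per-sample weight update. The following is a minimal sketch of that scheme for the S-S case (sigmoid hidden and output neurons), not the authors' code: the network size, squared-error loss, fixed learning rate eta, and all names such as train_online are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, n_hidden=8, eta=0.1, n_cycles=200, seed=0):
    """Online gradient training of a single-hidden-layer sigmoid network,
    with each cycle presenting every sample exactly once in random order."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.5, size=(n_hidden, d))  # input-to-hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)       # hidden-to-output weights
    for _ in range(n_cycles):                      # one training cycle
        for i in rng.permutation(n):               # stochastic order, each sample once
            h = sigmoid(V @ X[i])                  # hidden activations
            out = sigmoid(w @ h)                   # network output
            # squared-error loss E = 0.5 * (out - y[i]) ** 2
            delta = (out - y[i]) * out * (1.0 - out)
            grad_w = delta * h
            grad_V = np.outer(delta * w * h * (1.0 - h), X[i])
            w -= eta * grad_w                      # online (per-sample) updates
            V -= eta * grad_V
    return V, w

# Example usage on a toy XOR-like problem:
# X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
# y = np.array([0, 1, 1, 0], dtype=float)
# V, w = train_online(X, y)
```

Note that a fixed small learning rate is used here purely for illustration; the paper's convergence theorems impose (relaxed) conditions on the learning rate and activation functions that this sketch does not enforce.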