In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. We first prove that the new error function, which incorporates the penalty term, decreases monotonically across training iterations. Building on this result, we show that the weights remain uniformly bounded during training and that the algorithm converges deterministically. Sufficient conditions are also given for both weak and strong convergence.
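To make the update rule concrete, below is a minimal sketch of such a training loop, assuming a sigmoid hidden layer, a linear output, a squared-error loss with an L2 penalty, and the standard momentum update. All function names and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, n_hidden=8, eta=0.05, lam=1e-4, mu=0.9, epochs=50, seed=0):
    """Online gradient descent with L2 penalty and momentum for a
    two-layer network (sketch; hyperparameters are illustrative)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    # Weights: V maps inputs to hidden units, w maps hidden units to the output.
    V = rng.normal(scale=0.5, size=(n_hidden, n_in))
    w = rng.normal(scale=0.5, size=n_hidden)
    dV, dw = np.zeros_like(V), np.zeros_like(w)   # momentum buffers

    for _ in range(epochs):
        for i in rng.permutation(len(X)):         # online: one sample at a time
            x, t = X[i], y[i]
            h = sigmoid(V @ x)                    # hidden activations
            out = w @ h                           # linear output
            err = out - t                         # dE/d(out) for E = err**2 / 2
            # Gradients of the penalized error E + (lam/2)*(||w||^2 + ||V||^2)
            gw = err * h + lam * w
            gV = (err * w * h * (1.0 - h))[:, None] * x[None, :] + lam * V
            # Momentum update: new step = -eta * gradient + mu * previous step
            dw = -eta * gw + mu * dw
            dV = -eta * gV + mu * dV
            w += dw
            V += dV
    return V, w
```

Here `lam` scales the L2 penalty term that keeps the weights bounded, and `mu` is the momentum coefficient. The paper's convergence results impose sufficient conditions relating the learning rate, penalty coefficient, and momentum coefficient; the fixed values above are placeholders and make no attempt to satisfy those conditions.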