In this paper, we study the convergence of an online gradient method with an inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically at the start of each training cycle. Both two-layer and three-layer network models are considered, and two convergence theorems are established: sufficient conditions are given under which the method converges both weakly and strongly. The algorithm is applied to the classical two-spiral problem and to a Gabor-function identification problem to support the theoretical findings.
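To make the setting concrete, the following is a minimal sketch of one plausible instantiation of such a scheme: an online gradient method for a two-layer network with a weight-decay-style (inner-product) penalty, a simple adaptive-momentum heuristic, and stochastic permutation of the samples at each cycle. The network shapes, the penalty coefficient lam, and the particular momentum rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch (not the paper's exact algorithm) of an online gradient method with
# an inner-product (weight-norm) penalty and adaptive momentum for a
# two-layer feedforward network. Hyperparameters below are assumed values.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 10
V = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input-to-hidden weights
w = rng.normal(scale=0.1, size=n_hidden)           # hidden-to-output weights

eta, lam = 0.05, 1e-4        # learning rate and penalty coefficient (assumed)
dV_prev = np.zeros_like(V)   # previous weight increments, for the momentum term
dw_prev = np.zeros_like(w)

def train(X, y, n_cycles=100):
    global V, w, dV_prev, dw_prev
    n = len(X)
    for cycle in range(n_cycles):
        # Samples are permuted stochastically at the start of each cycle,
        # matching the assumption in the convergence analysis.
        for i in rng.permutation(n):
            x, t = X[i], y[i]
            h = sigmoid(V @ x)   # hidden-layer activations
            out = w @ h          # linear output unit (assumed)
            err = out - t
            # Gradients of 0.5*err**2 plus the penalty
            # 0.5*lam*(||w||^2 + ||V||^2), i.e. an inner-product penalty term.
            gw = err * h + lam * w
            gV = np.outer(err * w * h * (1.0 - h), x) + lam * V
            # Adaptive momentum: shrink the momentum factor as the current
            # gradient grows, so the combined step stays a descent direction.
            # This is one simple heuristic; the paper's rule may differ.
            mu = eta / (1.0 + np.linalg.norm(gw) + np.linalg.norm(gV))
            dw = -eta * gw + mu * dw_prev
            dV = -eta * gV + mu * dV_prev
            w, V = w + dw, V + dV
            dw_prev, dV_prev = dw, dV

# Example usage: fit noisy samples of a simple two-input target function.
X = rng.uniform(-1, 1, size=(200, n_in))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
train(X, y)
```

Note the role of the penalty term: it bounds the weight norms during training, which is typically what makes the strong-convergence argument go through.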