The online gradient algorithm is widely used for training feedforward neural networks, and adding a penalty term is a common and popular way to improve a network's generalization performance. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty term proportional to the magnitude of the weights. With such a penalty term, the error function is guaranteed to decrease monotonically during the training iteration. A key point of the proofs is the boundedness of the network weights, which is itself a desirable consequence of adding the penalty.
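The setting described in the abstract can be illustrated with a small sketch: online (sample-by-sample) gradient descent on a one-hidden-layer network, where each update includes a penalty gradient proportional to the weights (weight decay). The architecture, toy data, and hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch of online gradient training with an L2 weight penalty.
# Network, data, and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)        # toy regression target

H = 8                                          # hidden units (assumed)
W1 = rng.normal(0.0, 0.5, (2, H))
W2 = rng.normal(0.0, 0.5, (H, 1))
eta, lam = 0.05, 1e-3                          # step size, penalty weight

def penalized_error(W1, W2):
    """Mean squared error plus the weight-magnitude penalty term."""
    h = np.tanh(X @ W1)
    err = 0.5 * np.mean((h @ W2 - y) ** 2)
    return err + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

e0 = penalized_error(W1, W2)
for epoch in range(20):
    for i in rng.permutation(len(X)):          # online: one sample per update
        xi, yi = X[i:i + 1], y[i:i + 1]
        h = np.tanh(xi @ W1)
        d_out = h @ W2 - yi                    # dE/d(output) for squared error
        gW2 = h.T @ d_out + lam * W2           # data gradient + penalty gradient
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        gW1 = xi.T @ d_h + lam * W1
        W2 -= eta * gW2
        W1 -= eta * gW1
e1 = penalized_error(W1, W2)

print(e1 < e0)                                 # penalized error decreased
print(max(np.max(np.abs(W1)), np.max(np.abs(W2))) < 10.0)  # weights stay bounded
```

The two printed checks mirror the abstract's two claims: the penalized error decreases over training, and the penalty keeps the weights bounded; the sketch demonstrates the behavior empirically rather than reproducing the paper's proof.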