In this brief, we consider an online gradient method with a penalty term for training feedforward neural networks. Specifically, the penalty is proportional to the norm of the weights. Its roles are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the penalty keeps the weights automatically bounded during training, we simplify the conditions required in the literature for the convergence of the online gradient method. A numerical example is given to support the theoretical analysis.
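To make the training scheme concrete, the following is a minimal sketch (not the authors' code) of an online gradient method with a penalty on the weight norm: each update subtracts the gradient of the per-sample squared error plus the gradient of lam * ||w||^2, i.e., a weight-decay term 2 * lam * w. The one-hidden-layer sigmoid network, the toy regression data, and the values of the learning rate eta and penalty coefficient lam are illustrative assumptions, not taken from the brief.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy regression data: 1-D input, scalar target (illustrative).
    X = rng.uniform(-1.0, 1.0, size=(200, 1))
    y = np.sin(np.pi * X[:, 0])

    n_hidden = 10
    W1 = rng.normal(scale=0.5, size=(n_hidden, 1))  # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)       # hidden -> output weights

    eta = 0.05   # learning rate (assumed value)
    lam = 1e-4   # penalty coefficient (assumed value)

    for epoch in range(50):
        for i in rng.permutation(len(X)):           # online: one sample per update
            x, t = X[i], y[i]
            h = sigmoid(W1 @ x + b1)                # hidden activations
            out = W2 @ h                            # linear output
            err = out - t                           # derivative of (1/2)*err^2 w.r.t. out

            # Per-sample gradients plus the penalty gradient 2*lam*w
            # (biases are left unpenalized, as is common practice).
            gW2 = err * h + 2.0 * lam * W2
            gh = err * W2 * h * (1.0 - h)           # backprop through the sigmoid
            gW1 = np.outer(gh, x) + 2.0 * lam * W1
            gb1 = gh

            W2 -= eta * gW2
            W1 -= eta * gW1
            b1 -= eta * gb1

    print("final weight norm:", np.sqrt((W1**2).sum() + (W2**2).sum()))

The printed weight norm illustrates the point of the analysis: because each update shrinks the weights by a factor tied to lam, the weight sequence stays bounded during training, which is the property the brief uses to simplify the convergence conditions.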