The weight decay method, one of the classical complexity regularization methods, is simple and appears to work well in some applications of backpropagation neural networks (BPNN). This paper establishes weak and strong convergence results for cyclic and almost-cyclic BPNN learning with a penalty term (CBP-P and ACBP-P). Convergence is guaranteed under relaxed conditions on the activation functions and the learning rate, together with an assumption on the stationary set of the error function. Furthermore, the boundedness of the weights during training is obtained in a simple and clear way. Numerical simulations support our theoretical results and demonstrate that ACBP-P outperforms CBP-P in both convergence speed and generalization ability.
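As a rough illustration of the training scheme the abstract describes, the sketch below implements per-sample gradient descent with a quadratic weight-decay penalty for a one-hidden-layer sigmoid network. It is a minimal sketch under stated assumptions: the function name, network shape, and hyperparameters are illustrative and not taken from the paper; only the cyclic vs. almost-cyclic ordering and the penalty term reflect the abstract.

```python
import numpy as np

def train_penalized_bp(X, y, hidden=8, lr=0.01, lam=1e-4, epochs=100,
                       almost_cyclic=False, seed=0):
    """Cyclic (CBP-P) or almost-cyclic (ACBP-P) backprop with a penalty
    term lam * (||w||^2 + ||V||_F^2) added to the squared error.
    Illustrative sketch only; names and defaults are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.1, size=(hidden, d))   # input-to-hidden weights
    w = rng.normal(scale=0.1, size=hidden)        # hidden-to-output weights

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # Cyclic learning: fixed sample order in every epoch.
        # Almost-cyclic learning: the order is reshuffled each epoch,
        # but every sample is still visited exactly once per epoch.
        order = rng.permutation(n) if almost_cyclic else np.arange(n)
        for i in order:
            h = sigmoid(V @ X[i])                 # hidden activations
            err = w @ h - y[i]                    # output error
            # Gradient of 0.5*err^2 + lam*(||w||^2 + ||V||_F^2);
            # the weight-decay terms 2*lam*w and 2*lam*V are what keep
            # the weight sequence bounded during training.
            grad_w = err * h + 2 * lam * w
            grad_V = np.outer(err * w * h * (1 - h), X[i]) + 2 * lam * V
            w -= lr * grad_w
            V -= lr * grad_V
    return V, w
```

In this sketch the only difference between CBP-P and ACBP-P is whether the visiting order is reshuffled at the start of each epoch, which mirrors the distinction the abstract draws between the two procedures; calling, e.g., `train_penalized_bp(X, y, almost_cyclic=True)` selects the almost-cyclic variant.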