Boundedness of a batch gradient method with penalty for feedforward neural networks
MATH'07 Proceedings of the 12th WSEAS International Conference on Applied Mathematics
Penalty methods are commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. This paper presents weight boundedness and convergence results for the batch backpropagation (BP) algorithm with a penalty term, applied to feedforward neural networks with one hidden layer. A key point of the proofs is the monotonicity of the penalized error function during training. A relationship between the learning rate and the penalty parameter is proposed to guarantee convergence. The algorithm is applied to two classification problems to support the theoretical findings.
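As a rough illustration of the kind of method the paper studies (not the authors' exact formulation), a batch gradient step on a one-hidden-layer network can minimize a penalized error E(w) = MSE(w) + λ‖w‖², where the penalty gradient 2λw pulls every weight toward zero and thereby keeps the weight sequence bounded. The layer sizes, activation functions, and parameter names below are assumptions for the sketch:

```python
import numpy as np

def batch_gradient_penalty_step(W1, W2, X, y, eta=0.1, lam=1e-3):
    """One full-batch gradient step minimizing MSE + lam * ||w||^2.

    Hypothetical sketch: one tanh hidden layer, linear scalar output.
    W1: (d, h) input-to-hidden weights; W2: (h,) hidden-to-output weights.
    """
    n = X.shape[0]
    H = np.tanh(X @ W1)           # hidden activations, shape (n, h)
    out = H @ W2                  # network outputs, shape (n,)
    err = out - y                 # residuals, shape (n,)
    # gradients of the mean squared error over the whole batch
    gW2 = H.T @ err / n
    gH = np.outer(err, W2) * (1.0 - H**2)   # backprop through tanh
    gW1 = X.T @ gH / n
    # the penalty term lam * ||w||^2 contributes 2*lam*w to each gradient
    W1_new = W1 - eta * (gW1 + 2.0 * lam * W1)
    W2_new = W2 - eta * (gW2 + 2.0 * lam * W2)
    return W1_new, W2_new
```

Iterating this step on a fixed training set gives the batch algorithm whose penalized error, under the paper's conditions on the learning rate and penalty parameter, decreases monotonically.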