In this paper, a penalty term is added to the conventional error function to improve the generalization of the Ridge Polynomial neural network. To guide the choice of appropriate learning parameters, we establish a monotonicity theorem and two convergence theorems, one weak and one strong, for the synchronous gradient method with penalty for this network. Experimental results on a function approximation problem confirm the validity of these theoretical results.
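The setting described above can be illustrated with a minimal sketch (not the paper's code): a small Ridge Polynomial network of order N computes y(x) = Σ_{i=1}^{N} Π_{j=1}^{i} (w_ij·x + b_ij), and gradient descent minimizes the penalized error E(θ) = MSE(θ) + λ·Σ w_ij². All names, the order N = 2, the target function f(x) = x², and the numerical-gradient shortcut are illustrative assumptions.

```python
import numpy as np

# Assumed sketch: Ridge Polynomial network of order ORDER for scalar input x,
#   y(x) = sum_{i=1}^{ORDER} prod_{j=1}^{i} (w_ij * x + b_ij),
# trained by minimizing the penalized error E = MSE + lam * sum(w_ij^2).
ORDER = 2

def rpnn_forward(params, x):
    """params has shape (ORDER, ORDER, 2): a (w_ij, b_ij) pair per ridge unit."""
    out = 0.0
    for i in range(ORDER):
        prod = 1.0
        for j in range(i + 1):
            w, b = params[i, j]
            prod *= w * x + b
        out += prod
    return out

def penalized_loss(theta, xs, ys, lam):
    params = theta.reshape(ORDER, ORDER, 2)
    preds = np.array([rpnn_forward(params, x) for x in xs])
    # The penalty acts on the weights only, as in weight-decay regularization.
    return np.mean((preds - ys) ** 2) + lam * np.sum(params[..., 0] ** 2)

def num_grad(f, theta, eps=1e-6):
    """Central-difference gradient; avoids hand-deriving the product rules."""
    g = np.zeros_like(theta)
    for k in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        g[k] = (f(tp) - f(tm)) / (2 * eps)
    return g

# Function approximation example: learn f(x) = x^2 on [-1, 1].
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 21)
ys = xs ** 2
theta = 0.1 * rng.standard_normal(ORDER * ORDER * 2)
lam, lr = 1e-4, 0.05

losses = []
for _ in range(500):
    loss_fn = lambda t: penalized_loss(t, xs, ys, lam)
    losses.append(loss_fn(theta))
    theta = theta - lr * num_grad(loss_fn, theta)  # gradient step on penalized error

print(losses[0], losses[-1])  # the penalized error decreases over training
```

The decreasing loss sequence mirrors the monotonicity result for the gradient method with penalty; the weak-convergence statement would additionally assert that the gradient norm tends to zero, and the strong one that the weight sequence itself converges.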