Letters: Convergence of gradient method with penalty for Ridge Polynomial neural network

  • Authors:
  • Xin Yu; Qingfeng Chen

  • Affiliations:
  • School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China (both authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2012

Abstract

In this paper, a penalty term is added to the conventional error function to improve the generalization ability of the Ridge Polynomial neural network. To guide the choice of appropriate learning parameters, we prove a monotonicity theorem and two convergence theorems, one weak and one strong, for the synchronous gradient method with penalty for this network. Experimental results on a function approximation problem confirm that these theoretical results hold in practice.
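The training scheme the abstract describes can be sketched in a few lines: batch ("synchronous") gradient descent on a squared-error function augmented with an L2 penalty term, applied to a ridge polynomial network built as a sum of Pi-Sigma blocks of increasing order. This is a minimal illustrative sketch under those assumptions, not the authors' implementation; the function names, initialization, and all hyperparameter values are made up for the example.

```python
import numpy as np

def rpn_forward(W, x):
    """Order-N ridge polynomial network: y = sum_i prod_{j<=i} (w_ij . x).
    W is a list where W[i-1] holds the i ridge weight vectors of block i."""
    return sum(np.prod(Wi @ x) for Wi in W)

def penalized_error(W, X, T, lam):
    """Conventional squared error plus an L2 penalty term on all weights."""
    err = 0.5 * sum((rpn_forward(W, x) - t) ** 2 for x, t in zip(X, T))
    pen = lam * sum(np.sum(Wi ** 2) for Wi in W)
    return err + pen

def train(X, T, order=2, lam=1e-4, eta=0.01, epochs=500, seed=0):
    """Synchronous (batch) gradient descent on the penalized error:
    all weights are updated at once from gradients accumulated over
    the whole training set."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = [rng.normal(scale=0.1, size=(i, d)) for i in range(1, order + 1)]
    losses = [penalized_error(W, X, T, lam)]
    for _ in range(epochs):
        grads = [np.zeros_like(Wi) for Wi in W]
        for x, t in zip(X, T):
            e = rpn_forward(W, x) - t
            for k, Wk in enumerate(W):
                ridges = Wk @ x  # ridge responses of block k+1
                for j in range(len(ridges)):
                    # d(prod ridges)/d(w_kj) = (product of the other ridges) * x
                    others = np.prod(np.delete(ridges, j))
                    grads[k][j] += e * others * x
        # the penalty contributes 2*lam*W to each gradient
        for k in range(len(W)):
            W[k] -= eta * (grads[k] + 2.0 * lam * W[k])
        losses.append(penalized_error(W, X, T, lam))
    return W, losses
```

The monotonicity result in the paper concerns exactly the quantity tracked in `losses`: with suitably chosen `eta` and `lam`, the penalized error should decrease monotonically along the iteration, which is easy to check empirically on a small function approximation task (e.g. fitting t = x1*x2).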