Training pi-sigma network by online gradient algorithm with penalty for small weight update

  • Authors:
  • Yan Xiong; Wei Wu; Xidai Kang; Chao Zhang

  • Affiliations:
  • Yan Xiong: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, People's Republic of China, and Faculty of Science, University of Science and Technology Liaoning, Anshan, 114051 ...
  • Wei Wu: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, People's Republic of China. wuweiw@dlut.edu.cn
  • Xidai Kang: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, People's Republic of China. kxd_005@163.com
  • Chao Zhang: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, People's Republic of China. zhangchao_fox@163.com

  • Venue:
  • Neural Computation
  • Year:
  • 2007

Abstract

A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most widely used training method for feedforward neural networks. However, when the online gradient algorithm is applied to pi-sigma networks, the weight update increments may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function that increases the magnitude of the update increments when they are too small. Numerical experiments in this letter show that this strategy yields faster convergence.
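The structure described above can be illustrated with a small sketch: summing units feed a single product unit, and each sample triggers one gradient step. The exact form of the paper's adaptive penalty term is not reproduced here; the `eps`-based boosting of small updates below is an illustrative stand-in for it, and all names (`forward`, `online_step`, the XOR toy data) are assumptions for this sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, b, x):
    """Pi-sigma forward pass: K summing units, one product output unit.

    Only the summing-layer weights W (K x n) and biases b (K,) are trainable;
    the product unit has no weights.
    """
    h = W @ x + b                      # summing-unit outputs, shape (K,)
    return sigmoid(np.prod(h)), h

def online_step(W, b, x, t, lr=0.1, eps=1e-3):
    """One online gradient step on sample (x, t), with an illustrative
    boost of very small updates (stand-in for the paper's penalty term)."""
    y, h = forward(W, b, x)
    delta = (y - t) * y * (1.0 - y)    # dE/d(product) via sigmoid'
    grads = np.empty_like(W)
    gb = np.empty_like(b)
    for k in range(len(b)):
        # d(prod_j h_j)/dh_k = product of the other summing units
        prod_rest = np.prod(np.delete(h, k))
        grads[k] = delta * prod_rest * x
        gb[k] = delta * prod_rest
    # When the raw update increment is tiny (the failure mode the abstract
    # describes), rescale it up to a minimum norm eps.
    norm = np.sqrt(np.sum(grads ** 2) + np.sum(gb ** 2))
    scale = 1.0 if norm >= eps else eps / (norm + 1e-12)
    W -= lr * scale * grads
    b -= lr * scale * gb
    return 0.5 * (y - t) ** 2          # squared error on this sample

# Toy usage: online training on an XOR-like target with K = 2 summing units.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])
K, n = 2, 2
W = rng.normal(scale=0.5, size=(K, n))
b = rng.normal(scale=0.5, size=K)
for epoch in range(2000):
    for x, t in zip(X, T):
        loss = online_step(W, b, x, t)
```

XOR is a natural toy case here because a product of two affine summing units can realize the quadratic surface XOR requires, which a single linear unit cannot.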