A reliable resilient backpropagation method with gradient ascent

  • Authors:
  • Xugang Wang, Hongan Wang, Guozhong Dai, Zheng Tang

  • Affiliations:
  • Intelligence Engineering Laboratory, Institute of Software, The Chinese Academy of Sciences, Beijing, China (X. Wang, H. Wang, G. Dai); Faculty of Engineering, Toyama University, Toyama-shi, Japan (Z. Tang)

  • Venue:
  • ICIC'06: Proceedings of the 2006 International Conference on Intelligent Computing, Part II
  • Year:
  • 2006

Abstract

While the Resilient Backpropagation (RPROP) method can be extremely fast in converging to a solution, it suffers from the local minima problem. In this paper, a fast and reliable learning algorithm for multi-layer artificial neural networks is proposed. The learning model has two phases: an RPROP phase and a gradient ascent phase. Alternating between the two phases helps the network escape from local minima. The proposed algorithm is tested on several benchmark problems, on all of which the system is shown to escape from local minima and to converge faster than both backpropagation with momentum and simulated annealing techniques.
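
The abstract only outlines the two-phase scheme, so the following is a minimal sketch of how an RPROP descent phase might alternate with a brief gradient-ascent phase, not the authors' exact formulation. The stall-detection criterion, the iRPROP- style sign-change handling, and all hyperparameter names (`ga_steps`, `ga_lr`, `stall_tol`, etc.) are assumptions for illustration:

```python
import numpy as np

def rprop_ga_train(grad_fn, loss_fn, w, epochs=100, ga_steps=5, ga_lr=0.01,
                   eta_plus=1.2, eta_minus=0.5, d_min=1e-6, d_max=50.0,
                   stall_tol=1e-6):
    """Alternate an RPROP descent phase with a short gradient-ascent phase
    when training stalls (a sketch; the switching rule is an assumption)."""
    delta = np.full_like(w, 0.1)       # per-weight adaptive step sizes
    prev_grad = np.zeros_like(w)
    prev_loss = np.inf
    for epoch in range(epochs):
        # --- RPROP phase: sign-based updates with adaptive step sizes ---
        g = grad_fn(w)
        same_sign = prev_grad * g > 0   # gradient kept its sign: speed up
        flipped   = prev_grad * g < 0   # gradient changed sign: slow down
        delta[same_sign] = np.minimum(delta[same_sign] * eta_plus, d_max)
        delta[flipped]   = np.maximum(delta[flipped] * eta_minus, d_min)
        g = np.where(flipped, 0.0, g)   # iRPROP-: skip update after a flip
        w = w - np.sign(g) * delta
        prev_grad = g
        # --- Gradient-ascent phase: climb uphill briefly when stuck ---
        loss = loss_fn(w)
        if abs(prev_loss - loss) < stall_tol:
            for _ in range(ga_steps):
                w = w + ga_lr * grad_fn(w)   # ascend to leave the basin
            prev_grad = np.zeros_like(w)     # reset RPROP's sign memory
        prev_loss = loss
    return w
```

In use, `grad_fn` and `loss_fn` would be the gradient and error of the multi-layer network on the training set (e.g., mean squared error on an XOR-style benchmark); the key design point the paper argues is that the ascent steps perturb the weights enough to leave a shallow basin while RPROP's adaptive steps recover descent speed afterwards.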