The multi-phase method in fast learning algorithms

  • Authors:
  • Chi-Chung Cheung; Sin-Chun Ng

  • Affiliations:
  • Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hong Kong, China

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications proposed to improve the performance of BP have focused on solving the "flat spot" problem to increase the convergence rate. However, their performance is limited by the error overshooting problem. In [20], a novel approach called BP with Two-Phase Magnified Gradient Function (2P-MGFPROP) was introduced to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. In this paper, that approach is further enhanced by dividing the learning process into multiple phases and adaptively assigning different fast learning algorithms to different phases. The performance investigation shows that the convergence rate can be increased by up to two times compared with existing fast learning algorithms.
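
The multi-phase idea can be illustrated with a minimal sketch: train a small network with an MGF-style magnified gradient in a first phase to escape flat spots, then hand over to standard BP once the error stops improving. The network size, the magnification exponent S, and the stall-based switching criterion below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Minimal multi-phase training sketch on XOR with a 2-2-1 sigmoid MLP.
# Phase 1: MGF-style magnified gradient (sigmoid-derivative factor raised
# to a power S < 1 to ease the "flat spot"). Phase 2: standard BP.
# The stall-based phase switch is an illustrative assumption.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, S, patience = 0.5, 0.5, 50   # S: assumed magnification exponent
phase, best_err, stall = 1, np.inf, 0

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = float(np.mean((y - out) ** 2))

    # Backward pass; phase 1 magnifies the sigmoid-derivative factor.
    d_out = out * (1 - out)
    d_h = h * (1 - h)
    if phase == 1:
        d_out = d_out ** S
        d_h = d_h ** S
    delta2 = (out - y) * d_out
    delta1 = (delta2 @ W2.T) * d_h
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)

    # Switch phase when the error stops improving (assumed criterion).
    if err < best_err - 1e-6:
        best_err, stall = err, 0
    else:
        stall += 1
        if stall > patience and phase == 1:
            phase, stall = 2, 0  # hand over to plain BP

    if err < 1e-3:
        break

print(f"finished at epoch {epoch}, phase {phase}, mse {err:.5f}")
```

In this sketch the phase boundary is driven by training progress rather than a fixed schedule, which mirrors the paper's theme of matching the learning rule to the current stage of training; the paper's own phase-assignment rules should be consulted for the actual method.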