An algorithm of supervised learning for multilayer neural networks

  • Authors:
  • Zheng Tang, XuGang Wang, Hiroki Tamura, Masahiro Ishii

  • Affiliation (all authors):
  • Faculty of Engineering, Toyama University, 3190 Gofuku, Toyama 930-8555, Japan

  • Venue:
  • Neural Computation
  • Year:
  • 2003

Abstract

A method of supervised learning for multilayer artificial neural networks that escapes local minima is proposed. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point equals an error measure, seeking a set of weights that minimizes this error. When backpropagation gets stuck in a local minimum, the gradient ascent phase attempts to fill up the valley by modifying the gain parameters in the direction of gradient ascent of the error measure. The two phases are repeated until the network escapes the local minimum. The algorithm has been tested on benchmark problems, such as exclusive-OR (XOR), parity, alphabetic character learning, and recognition of noisy Arabic numerals, as well as a realistic real-world problem: classification of radar returns from the ionosphere. For all of these problems, the systems are shown to escape backpropagation's local minima and to converge faster with the proposed method than with simulated annealing techniques.
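The two-phase scheme in the abstract can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the paper's exact algorithm: it takes the gain parameters to be the slopes of the sigmoid activations, uses a small 2-2-1 network on XOR, and detects a "stuck" state with a simple error-plateau test. Network size, learning rates, and the stall threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0.0, 1.0, (1, 2)); b2 = np.zeros(1)
g1 = np.ones(2)   # hidden-layer sigmoid gains (assumed meaning of "gain parameters")
g2 = np.ones(1)   # output-layer sigmoid gain

def sigmoid(a, g):
    return 1.0 / (1.0 + np.exp(-g * a))

def batch_grads():
    """Error E = 0.5 * sum (y - t)^2 and its gradients w.r.t. weights and gains."""
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    dg1 = np.zeros_like(g1); dg2 = np.zeros_like(g2)
    E = 0.0
    for x, t in zip(X, T):
        a1 = W1 @ x + b1; h = sigmoid(a1, g1)     # forward pass
        a2 = W2 @ h + b2; y = sigmoid(a2, g2)
        E += 0.5 * float(np.sum((y - t) ** 2))
        d2 = (y - t) * y * (1.0 - y)              # dE/d(g2 * a2)
        dW2 += np.outer(d2 * g2, h); db2 += d2 * g2; dg2 += d2 * a2
        d1 = (W2.T @ (d2 * g2)) * h * (1.0 - h)   # dE/d(g1 * a1)
        dW1 += np.outer(d1 * g1, x); db1 += d1 * g1; dg1 += d1 * a1
    return E, dW1, db1, dW2, db2, dg1, dg2

eta, kappa, prev_E, E0 = 0.5, 0.05, np.inf, None
for _ in range(20000):
    E, dW1, db1, dW2, db2, dg1, dg2 = batch_grads()
    if E0 is None:
        E0 = E
    if E < 0.01:                                  # converged
        break
    if abs(prev_E - E) < 1e-8:                    # stalled: assume a local minimum
        # Gradient ASCENT phase: push the gains uphill in error
        # to "fill up the valley" around the local minimum.
        g1 += kappa * dg1; g2 += kappa * dg2
    else:
        # Backpropagation phase: steepest descent on weights and biases.
        W1 -= eta * dW1; b1 -= eta * db1
        W2 -= eta * dW2; b2 -= eta * db2
    prev_E = E

print(f"initial error {E0:.4f} -> final error {E:.4f}")
```

Note the asymmetry that makes the idea work: descent acts on the weights while ascent acts only on the gains, so raising the error through the gains reshapes the surface the weights see rather than undoing the descent directly.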