Deterministic global optimization for FNN training

  • Authors: K.-A. Toh
  • Affiliation: Institute for Infocomm Research, Singapore
  • Venue: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year: 2003

Abstract

This paper addresses the problem of training feedforward neural networks by global optimization. The main contributions are a characterization of the global optimality of a network error function and the formulation of a global descent algorithm for the network training problem. A network with a single hidden layer and a single output unit is considered. By means of a monotonic transformation, a sufficient condition for the global optimality of a network error function is presented. Based on this condition, a penalty-based algorithm is derived that directs the search towards regions likely to contain the global minima. Numerical comparisons on benchmark problems from the neural network literature show the superiority of the proposed algorithm over several local methods in terms of the percentage of trials attaining the desired solutions. The algorithm is also shown to be effective for several pattern recognition problems.
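
The abstract does not spell out the algorithm itself, but the general idea it describes can be sketched. The Python sketch below trains a single-hidden-layer, single-output network by gradient descent on an error function augmented with a penalty term that repels the search from minima already found. The network structure matches the abstract; the penalty form, step sizes, and restart scheme are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (NOT the paper's exact algorithm) of penalty-based global
# descent for a single-hidden-layer, single-output feedforward network.
# The repulsive penalty, learning rate, and restart rule are assumptions
# introduced here for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, n_hidden):
    """Single tanh hidden layer, single linear output unit."""
    d = X.shape[1]
    W1 = w[: n_hidden * d].reshape(n_hidden, d)     # input-to-hidden weights
    b1 = w[n_hidden * d : n_hidden * (d + 1)]       # hidden biases
    W2 = w[n_hidden * (d + 1) : -1]                 # hidden-to-output weights
    b2 = w[-1]                                      # output bias
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def error(w, X, y, n_hidden):
    """Sum-of-squares network error function E(w)."""
    r = forward(w, X, n_hidden) - y
    return 0.5 * np.dot(r, r)

def num_grad(f, w, eps=1e-6):
    """Central-difference gradient, to keep the sketch self-contained."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def penalised_descent(X, y, n_hidden=4, restarts=5, steps=1000, lr=0.05, mu=0.1):
    """Descend on E(w) plus a penalty that pushes the search away from
    previously located minima -- a stand-in for the global descent idea."""
    d = X.shape[1]
    n_w = n_hidden * (d + 1) + n_hidden + 1
    found = []                          # minimizers located so far
    best_w, best_e = None, np.inf
    for _ in range(restarts):
        w = rng.normal(scale=0.5, size=n_w)
        def f(w):
            # Assumed penalty: inverse squared distance to each found minimum.
            pen = sum(mu / (1e-3 + np.sum((w - m) ** 2)) for m in found)
            return error(w, X, y, n_hidden) + pen
        for _ in range(steps):
            w = w - lr * num_grad(f, w)
        e = error(w, X, y, n_hidden)
        found.append(w.copy())
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

# Toy usage: XOR, a classic benchmark where local methods often stall.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, e = penalised_descent(X, y)
print("final error:", e)
```

The repulsive penalty above is one simple way to bias descent away from known local minima; the paper instead derives a sufficient global-optimality condition via a monotonic transformation of the error function, which this sketch does not reproduce.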