Continuous Function Optimisation via Gradient Descent on a Neural Network Approximation Function

  • Authors:
  • Kate A. Smith; Jatinder N. D. Gupta

  • Venue:
  • IWANN '01 Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence-Part I
  • Year:
  • 2001


Abstract

Existing neural network approaches to optimisation are quite limited in the types of problems they can solve. Convergence theorems that rely on Liapunov functions typically restrict these techniques to the minimisation of quadratic functions. This paper proposes a new neural network approach that can solve a broad variety of continuous optimisation problems, since it makes no assumptions about the nature of the objective function. The approach comprises two stages: first, a feedforward neural network is used to approximate the objective function from a sample of evaluated data points; then, a feedback neural network is used to perform gradient descent on this approximation function. The final solution is a local minimum of the approximated function, which should coincide with a true local minimum of the objective provided the learning has been accurate. The proposed method is evaluated on the De Jong test suite: a collection of continuous optimisation problems featuring characteristics such as saddle points, discontinuities, and noise.
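
The abstract does not include an implementation, but the two-stage procedure it describes is straightforward to sketch. Below is a minimal illustration in Python, not the authors' code: it fits a small feedforward network to sampled evaluations of De Jong's sphere function (used here as a stand-in objective), then performs gradient descent on the learned approximation with respect to the input. Note that the paper's second stage uses a feedback (recurrent) network to implement the descent dynamics; in this sketch, ordinary autograd-based gradient descent stands in for it, and all architecture and hyperparameter choices (layer sizes, learning rates, sample count) are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in objective: De Jong's sphere function f(x) = sum(x_i^2).
def objective(x):
    return (x ** 2).sum(dim=-1, keepdim=True)

torch.manual_seed(0)

# Stage 1: approximate the objective with a feedforward network,
# trained on a sample of evaluated data points.
X = torch.empty(512, 2).uniform_(-5.0, 5.0)   # sampled inputs
Y = objective(X)                              # evaluated objective values

surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
fit = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    fit.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X), Y)
    loss.backward()
    fit.step()

# Stage 2: gradient descent on the approximation function.  The
# network weights are frozen; only the input x is updated.
for p in surrogate.parameters():
    p.requires_grad_(False)

x = torch.empty(1, 2).uniform_(-5.0, 5.0).requires_grad_(True)
descent = torch.optim.SGD([x], lr=0.05)
for _ in range(500):
    descent.zero_grad()
    surrogate(x).sum().backward()   # gradient of the surrogate w.r.t. x
    descent.step()

print("approximate minimiser:", x.detach().tolist())
print("true objective there :", objective(x.detach()).item())
```

If the surrogate has been fitted accurately, the descent should end near the origin, the sphere function's global minimum; on the multimodal or noisy De Jong functions it would, as the abstract notes, find only a local minimum of the approximation.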