A One-Layer Recurrent Neural Network With a Discontinuous Hard-Limiting Activation Function for Quadratic Programming

  • Authors:
  • Qingshan Liu; Jun Wang

  • Affiliations:
  • Chinese University of Hong Kong, Hong Kong

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable, and the output variables are proven to converge to optimal solutions, provided that the objective function is strictly convex on the set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning demonstrate the effectiveness and performance of the neural network.
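
The abstract only summarizes the approach, so the following is a minimal, hypothetical NumPy sketch of the general idea behind hard-limiting neurodynamic optimization: an Euler-discretized gradient flow for an equality-constrained quadratic program in which the constraint Ax = b is enforced through a discontinuous sign-function (exact-penalty) term. The function name hard_limit_qp, the penalty weight sigma, the step size eta, and the test problem are illustrative assumptions, not the specific network model analyzed in the paper; analyses of such discontinuous dynamics are typically carried out in the framework of differential inclusions.

```python
# Hypothetical sketch (not the paper's exact model): a discretized
# neurodynamic iteration for the equality-constrained QP
#     minimize 1/2 x'Qx + c'x   subject to  Ax = b,
# where a hard-limiting sign term acts as an exact penalty on Ax - b.
import numpy as np

def hard_limit_qp(Q, c, A, b, sigma=10.0, eta=1e-3, steps=50_000):
    """Euler discretization of dx/dt = -(Qx + c + sigma * A' sign(Ax - b))."""
    x = np.zeros(Q.shape[0])
    for _ in range(steps):
        grad = Q @ x + c + sigma * (A.T @ np.sign(A @ x - b))
        x = x - eta * grad
    # The discontinuous sign term makes the iterate chatter in a small
    # neighborhood of the constraint set; eta controls that amplitude.
    return x

# Example: strictly convex objective with one equality constraint x1 + x2 = 1.
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x_nn = hard_limit_qp(Q, c, A, b)

# Reference solution from the KKT system [[Q, A'], [A, 0]] [x; y] = [-c; b].
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
x_kkt = np.linalg.solve(K, np.concatenate([-c, b]))[:2]

print("neurodynamic iterate:", x_nn)   # approximately [0.5, 0.5]
print("KKT solution       :", x_kkt)   # exactly [0.5, 0.5]
```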