Solving convex optimization problems using recurrent neural networks in finite time

  • Authors:
  • Long Cheng; Zeng-Guang Hou; Noriyasu Homma; Min Tan; Madan M. Gupta

  • Affiliations:
  • Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China (Long Cheng, Zeng-Guang Hou, Min Tan)
  • School of Health Sciences, Faculty of Medicine, Tohoku University, Sendai, Japan (Noriyasu Homma)
  • Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada (Madan M. Gupta)

  • Venue:
  • IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
  • Year:
  • 2009

Abstract

A recurrent neural network is proposed to solve convex optimization problems. By employing a specific nonlinear unit, the proposed neural network is proven to converge to the optimal solution in finite time, which dramatically increases computational efficiency. Compared with most existing stability results, i.e., asymptotic stability and exponential stability, the obtained finite-time stability result is more attractive and can therefore be considered a useful supplement to the current literature. In addition, a switching structure is suggested to further speed up the convergence of the neural network. Moreover, by using the penalty function method, the proposed neural network extends straightforwardly to constrained optimization problems. Finally, the satisfactory performance of the proposed approach is illustrated by two simulation examples.
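The abstract does not spell out the network's dynamics, but the finite-time convergence it describes is characteristic of gradient flows driven through a discontinuous, sign-type nonlinear unit. Below is a minimal sketch of that idea on a generic unconstrained convex quadratic; the objective, gain `k`, step size, and stopping tolerance are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative convex quadratic f(x) = 0.5 * x^T Q x - b^T x (not from the paper)
Q = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])

def grad_f(x):
    """Gradient of the quadratic objective."""
    return Q @ x - b

def finite_time_flow(x0, k=1.0, dt=1e-3, steps=20000, tol=1e-2):
    """Forward-Euler simulation of the sign-type gradient flow
        dx/dt = -k * sgn(grad f(x)),
    a discontinuous dynamics whose continuous-time trajectory reaches
    the minimizer in finite time (df/dt = -k * ||grad f||_1 < 0 away
    from the optimum, and the right-hand side does not vanish there)."""
    x = np.array(x0, dtype=float)
    for i in range(steps):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            return x, i * dt  # entered a neighborhood of the optimum
        x -= dt * k * np.sign(g)
    return x, steps * dt

# Exact minimizer for comparison: solve Q x = b
x_star = np.linalg.solve(Q, b)
x_hat, t_hit = finite_time_flow([2.0, -2.0])
```

In continuous time the objective decreases at rate at least `k * ||grad f||_1`, so the minimum is reached after a finite time proportional to the initial suboptimality, in contrast to the asymptotic decay of a plain gradient flow. The discrete Euler simulation only chatters within a step-size-dependent band around the optimum, which is why the stopping tolerance above is kept loose.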