A one-layer dual recurrent neural network with a Heaviside step activation function for linear programming with its linear assignment application

  • Authors:
  • Qingshan Liu; Jun Wang

  • Affiliations:
  • School of Automation, Southeast University, Nanjing, China; Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong

  • Venue:
  • ICANN'11: Proceedings of the 21st International Conference on Artificial Neural Networks, Part II
  • Year:
  • 2011

Abstract

This paper presents a one-layer recurrent neural network for solving linear programming problems. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition: a single gain parameter must exceed a derived lower bound. The number of neurons equals the number of decision variables of the dual optimization problem. Compared with existing neural networks for linear programming, the proposed network has salient features such as finite-time convergence and lower model complexity. The network is further tailored to the linear assignment problem, and simulation results demonstrate its effectiveness and characteristics.
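
As a rough illustration of this class of dual dynamics, the sketch below simulates a penalty-type dual network for a generic linear program and applies it to a small assignment instance. The specific dynamics dy/dt = b - σ·A·H(Aᵀy - c), the gain value sigma, the step size dt, and the helper names heaviside and dual_lp_network are assumptions made here for illustration (a standard exact-penalty dual-subgradient scheme), not the exact model derived in the paper; the paper's condition on the gain parameter is mimicked only loosely by choosing sigma "large enough".

```python
import numpy as np

def heaviside(v):
    # Elementwise Heaviside step activation: 1 where the argument is positive, else 0.
    return (v > 0).astype(float)

def dual_lp_network(A, b, c, sigma=10.0, dt=1e-4, steps=200_000):
    """Euler simulation of an assumed penalty-type dual dynamics for the LP
        min c^T x  s.t.  A x = b, x >= 0,
    whose dual is  max b^T y  s.t.  A^T y <= c.
    One state variable ("neuron") per dual decision variable y_i.
    """
    y = np.zeros(A.shape[0])
    for _ in range(steps):
        violation = A.T @ y - c                      # dual-constraint violations
        y += dt * (b - sigma * A @ heaviside(violation))
    return y

# Small assignment instance: assign 3 workers to 3 jobs at minimum total cost.
n = 3
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
c = cost.ravel()                                      # x_{ij} flattened row-major
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0                     # each worker assigned once
    A[n + i, i::n] = 1.0                              # each job filled once
b = np.ones(2 * n)

y = dual_lp_network(A, b, c)
print("dual objective b^T y =", b @ y)                # should settle near 5, the optimal cost
```

The example exploits the fact that the assignment problem is a linear program whose LP relaxation has an integral optimum, so a dual LP solver recovers the optimal assignment cost; the chattering of the discontinuous Heaviside term is tamed here simply by a small Euler step, whereas the paper establishes finite-time convergence for the continuous-time model.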