This paper presents a novel continuous-time recurrent neural network model that performs linear fractional optimization subject to bound constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized over the bound constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and converges to an exact optimal solution from any initial point chosen in the feasible region. Simulation results further demonstrate the global convergence and good performance of the proposed neural network on linear fractional programming problems with bound constraints.
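The abstract does not reproduce the network's state equation, but the behavior it describes (a trajectory that stays inside the feasible box and settles at a minimizer of a linear fractional objective) can be illustrated with a standard projection-type dynamics, dx/dt = P_Box(x - ∇f(x)) - x, discretized by an Euler step. The problem data (`c`, `d`, the box `[0,1]^2`) below are hypothetical and chosen only so the denominator stays positive on the box; this is a sketch of the general technique, not the paper's exact model.

```python
import numpy as np

# Illustrative data (assumed, not from the paper):
# minimize f(x) = (c^T x + c0) / (d^T x + d0) over the box l <= x <= u,
# with d^T x + d0 > 0 everywhere on the box.
c = np.array([1.0, -2.0]); c0 = 1.0
d = np.array([1.0, 1.0]);  d0 = 4.0
l = np.zeros(2); u = np.ones(2)

def grad_f(x):
    """Gradient of the linear fractional objective (quotient rule)."""
    num = c @ x + c0
    den = d @ x + d0
    return (c * den - d * num) / den**2

# Euler discretization of dx/dt = P_Box(x - grad f(x)) - x.
# Each step is a convex combination of two points in the box,
# so the discrete trajectory also never leaves the feasible region.
x = np.array([0.5, 0.5])                 # initial point inside the box
for _ in range(20000):
    x = x + 0.01 * (np.clip(x - grad_f(x), l, u) - x)

print(x)
```

For this particular data the trajectory converges to the vertex (0, 1), which minimizes f on the box, matching the abstract's claims of feasibility invariance and global convergence for this class of dynamics.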