This paper presents a continuous-time recurrent neural network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that the recurrent neural network can also solve. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function subject to the bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, the recurrent neural network is complete in the sense that the set of optima of the function subject to the bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and, for any initial point in the feasible bound region, converges to the set of equilibria of the neural network. 3) The recurrent neural network has an attractivity property in the sense that its trajectory eventually converges to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost all positive network parameters. Simulation results demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
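The dynamics of such projection-type neurodynamic models can be illustrated with a minimal sketch. The system below is a common formulation from this literature, not necessarily the authors' exact model: the state follows dx/dt = P(x − α∇f(x)) − x, where P clips each component to its bounds, integrated here by forward Euler on a strictly convex quadratic. The step sizes `alpha` and `dt` and the test problem are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (a generic projection-type neurodynamic system, not the
# authors' exact model): minimize f(x) subject to box constraints l <= x <= u
# via the dynamics  dx/dt = P_[l,u](x - alpha * grad f(x)) - x,
# where P_[l,u] projects componentwise onto the box. At an equilibrium,
# x = P_[l,u](x - alpha * grad f(x)), which is the KKT condition for the
# bound-constrained problem.

def project(x, l, u):
    """Componentwise projection onto the box [l, u]."""
    return np.clip(x, l, u)

def solve_bound_constrained(Q, c, l, u, alpha=0.1, dt=0.05, steps=5000):
    """Forward-Euler integration of the projection dynamics for
    f(x) = 0.5 * x'Qx + c'x, starting from the origin."""
    x = np.zeros_like(c)
    for _ in range(steps):
        grad = Q @ x + c                      # gradient of the quadratic
        x = x + dt * (project(x - alpha * grad, l, u) - x)
    return x

# Example: minimize 0.5*(x1^2 + 2*x2^2) + x1 - 3*x2 over the box [0, 1]^2.
# The unconstrained minimum (-1, 1.5) lies outside the box; the constrained
# optimum sits on the boundary at (0, 1).
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
c = np.array([1.0, -3.0])
l = np.zeros(2)
u = np.ones(2)
x_star = solve_bound_constrained(Q, c, l, u)
# x_star is approximately [0.0, 1.0]
```

The trajectory stays inside the box once it enters it (the quasiconvergence property the abstract describes), because the update is a convex combination of the current state and a point already projected into the feasible region when `dt <= 1`.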