This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem whose objective serves as an energy function. A gradient model is then defined directly from the derivatives of this energy function. The proposed neural network is shown to be stable in the sense of Lyapunov and to converge to an exact optimal solution of the original problem; moreover, a larger scaling factor yields a faster convergence rate for the trajectory. The validity and transient behavior of the neural network are demonstrated on several examples.
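To make the idea concrete, the following is a minimal sketch of such a gradient model, assuming a quadratic-penalty energy function on a toy two-variable CNP (this penalty construction, the problem data, and the function names are illustrative assumptions, not the paper's exact formulation):

```python
# Sketch of a gradient neural network for a convex nonlinear program (CNP),
# using an assumed quadratic-penalty energy function:
#   minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2   subject to  x1 + x2 <= 1,
#   E(x) = f(x) + (rho/2) * max(0, x1 + x2 - 1)^2.
# The exact optimum of the constrained problem is x* = (0, 1).

def grad_energy(x, rho=100.0):
    """Gradient of the penalized energy function E(x)."""
    x1, x2 = x
    s = max(0.0, x1 + x2 - 1.0)           # constraint violation, if any
    return (2.0 * (x1 - 1.0) + rho * s,   # dE/dx1
            2.0 * (x2 - 2.0) + rho * s)   # dE/dx2

def simulate(x0=(3.0, -2.0), lam=1.0, h=0.005, steps=20000):
    """Euler integration of the gradient dynamics dx/dt = -lam * grad E(x).
    Here lam plays the role of the scaling factor: a larger lam traverses
    the same trajectory in less simulated time (the step h must shrink
    accordingly to keep the discretization stable)."""
    x = list(x0)
    for _ in range(steps):
        g = grad_energy(x)
        x[0] -= lam * h * g[0]
        x[1] -= lam * h * g[1]
    return tuple(x)

x = simulate()
print(x)  # near (0, 1), offset O(1/rho) due to the finite penalty weight
```

With a finite penalty weight the equilibrium sits within O(1/rho) of the true optimum; the stationarity conditions give a violation s = 2/(1+rho), so rho = 100 lands within about 0.02 of (0, 1). The Lyapunov argument in the abstract corresponds to E(x(t)) being nonincreasing along these dynamics.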