The paper introduces a new approach to analyzing the stability of neural network models that does not rely on a Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. The discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem whose objective function is bounded below and has a Lipschitz continuous gradient, we prove that (a) every trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that every trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, from any initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results for the refined gradient-based neural network on several test problems are also reported.
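As a concrete illustration (a minimal sketch, not code from the paper): the gradient-based neural network referred to above is commonly written as the gradient flow dx/dt = -grad f(x), and its trajectories can be simulated by discretizing this ODE. The test quadratic, step size, and names below are illustrative assumptions; under the abstract's conditions (f bounded below, grad f Lipschitz continuous), the simulated trajectory settles at an equilibrium point where grad f(x) = 0.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# simulate the gradient flow dx/dt = -grad f(x) by forward Euler
# on a convex quadratic f(x) = 0.5*x'Qx - b'x, whose gradient
# Qx - b is Lipschitz continuous, matching the stated assumptions.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

def grad_f(x):
    return Q @ x - b                      # gradient of the quadratic

x = np.array([5.0, -5.0])                 # arbitrary initial point
h = 0.01                                  # Euler step size (illustrative)
for _ in range(5000):
    x = x - h * grad_f(x)                 # discretized gradient flow

print("trajectory endpoint:", x)
print("equilibrium Q^{-1} b:", np.linalg.solve(Q, b))
```

For this convex problem the printed endpoint agrees with the unique equilibrium, consistent with the convergence result stated in the abstract; for nonconvex objectives the same discretization can stall at saddle points, which is the situation the refined network is designed to address.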