Mathematical control theory: an introduction.
An infeasible-interior-point algorithm for linear complementarity problems. Mathematical Programming, Series A and B.
Solving nonlinear complementarity problems with neural networks: a reformulation method approach. Journal of Computational and Applied Mathematics.
Neural Networks for Optimization and Signal Processing.
Smoothing functions for second-order-cone complementarity problems. SIAM Journal on Optimization.
Stability analysis of gradient-based neural networks for optimization problems. Journal of Global Optimization.
An unconstrained smooth minimization reformulation of the second-order cone complementarity problem. Mathematical Programming, Series A and B.
Synchronization control of a class of memristor-based recurrent neural networks. Information Sciences.
Neural networks for solving second-order cone constrained variational inequality problem. Computational Optimization and Applications.
A recurrent neural network for solving a class of general variational inequalities. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
A recurrent neural network for solving nonlinear convex programs subject to linear constraints. IEEE Transactions on Neural Networks.
This paper proposes a neural network approach for efficiently solving general nonlinear convex programs with second-order cone constraints. The neural network model is built on a smoothed natural-residual merit function arising from an unconstrained minimization reformulation of the associated complementarity problem. We establish the existence and convergence of the network's trajectory, and we prove several stability properties of the network, including Lyapunov stability, asymptotic stability, and exponential stability. Numerical examples further demonstrate the effectiveness of the proposed neural network. This paper can be viewed as a follow-up to [20,26], since stronger stability results are obtained.
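The kind of dynamics the abstract describes, a gradient flow on a smoothed merit function whose minimizers solve a complementarity problem, can be sketched on a toy example. The sketch below is an illustrative assumption, not the paper's actual model: it uses a CHKS-type smoothing of the natural residual min(a, b), a simple componentwise mapping F(x) = x - a in place of a second-order cone constrained program, forward-Euler integration of the flow dx/dt = -grad Psi(x), and finite-difference gradients.

```python
import numpy as np

def phi_mu(a, b, mu):
    # CHKS-type smoothing of the natural residual min(a, b):
    # phi_mu(a, b) = (a + b - sqrt((a - b)^2 + 4*mu^2)) / 2
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

def merit(x, F, mu):
    # Smoothed natural-residual merit function Psi(x) = 0.5 * ||phi_mu(x, F(x))||^2;
    # Psi vanishes (as mu -> 0) exactly at solutions of the complementarity problem.
    r = phi_mu(x, F(x), mu)
    return 0.5 * np.dot(r, r)

def grad(f, x, h=1e-6):
    # Central finite-difference gradient (an analytic gradient would be used in practice).
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def neural_network_flow(F, x0, mu=1e-3, step=0.05, iters=5000):
    # Forward-Euler discretization of the gradient flow dx/dt = -grad Psi(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(lambda z: merit(z, F, mu), x)
    return x

# Toy NCP: find x >= 0 with F(x) >= 0 and x_i * F_i(x) = 0, where F(x) = x - a.
# The solution is x = max(a, 0) componentwise, i.e. [1, 0] here.
a = np.array([1.0, -2.0])
sol = neural_network_flow(lambda x: x - a, np.array([3.0, 3.0]))
print(sol)  # approximately [1, 0]
```

The flow decreases Psi along its trajectory, which is the Lyapunov-function argument behind the stability results the abstract mentions.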