Intrinsically, Lagrange multipliers in nonlinear programming algorithms play a regulating role in the search for the optimal solution of a constrained optimization problem; they can therefore be regarded as the counterpart of control inputs in a control system. From this perspective, constructing a nonlinear programming neural network can be formulated as solving a servomechanism problem whose unknown equilibrium point coincides with the optimal solution. In this paper, under the second-order sufficiency assumption on the nonlinear programming problem, a dynamic output feedback control law analogous to that of nonlinear servomechanism problems is proposed to stabilize the corresponding nonlinear programming neural network. Moreover, asymptotic stability is established by Lyapunov's first approximation principle (stability of the linearization).
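To illustrate the regulating role of the multipliers, consider a classical primal-dual gradient flow (Arrow-Hurwicz type), in which the multiplier behaves like an integral control input that drives the state back onto the constraint set. This is only a minimal sketch of the general idea, not the control law proposed in the paper; the quadratic objective, linear constraint, and Euler step size below are illustrative choices.

```python
import numpy as np

# Primal-dual gradient dynamics for
#   min (x1 - 1)^2 + (x2 - 2)^2   s.t.   x1 + x2 = 1.
# The multiplier lam integrates the constraint violation A x - b,
# acting like a control input that steers x onto the constraint,
# while x descends on the Lagrangian L(x, lam) = f(x) + lam^T (A x - b).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

x = np.zeros(2)      # primal state
lam = np.zeros(1)    # Lagrange multiplier ("control input")
dt = 0.01            # forward-Euler step for the ODE

for _ in range(20000):
    x_dot = -(grad_f(x) + A.T @ lam)   # primal descent on the Lagrangian
    lam_dot = A @ x - b                # dual ascent: constraint violation
    x += dt * x_dot
    lam += dt * lam_dot

# The equilibrium of this flow is the KKT point of the problem:
# x* = (0, 1) with multiplier lam* = 2.
print(x, lam)
```

Because the objective is strongly convex, the linearization at the KKT point has eigenvalues with negative real parts, so the trajectory converges to the equilibrium — the same mechanism, in miniature, that the servomechanism viewpoint exploits.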