Lagrange multipliers and optimality
SIAM Review
Nonsmooth analysis and control theory
Mathematical control theory: deterministic finite dimensional systems (2nd ed.)
Neural Networks for Optimization and Signal Processing
A Novel Method to Handle Inequality Constraints for Convex Programming Neural Network
Neural Processing Letters
Neural Units with Higher-Order Synaptic Operations for Robotic Image Processing Applications
Soft Computing - A Fusion of Foundations, Methodologies and Applications - Fuzzy-neural computation and robotics
A Recurrent Neural Network for Hierarchical Control of Interconnected Dynamic Systems
IEEE Transactions on Neural Networks
Inspired by the Lagrangian multiplier method with a quadratic penalty function, which is widely used in nonlinear programming theory, a Lagrange-type nonlinear programming neural network has been devised whose equilibria coincide with the KKT pairs of the underlying nonlinear programming problem, with minor modifications to handle inequality constraints [1,2]. The structure of such a neural network must, of course, be carefully designed so that it is asymptotically stable, and this is normally difficult to achieve even for simple nonlinear programming problems. However, if the penalty parameters in these neural networks are treated as control variables and a control law is found that stabilizes the network, the class of solvable nonlinear programming problems can reasonably be expected to grow substantially. In this paper, conditions that stabilize the Lagrange-type neural network are presented, and a control-Lyapunov-function approach is used to synthesize the adjustment laws for the penalty parameters.
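To illustrate the kind of dynamics the abstract refers to, the following is a minimal sketch (not the paper's exact method) of a Lagrange-type network with a quadratic penalty for an equality-constrained problem min f(x) s.t. h(x) = 0: the state flows along the negative gradient of the augmented Lagrangian in x and along the constraint residual in the multiplier, so that any equilibrium is a KKT pair. The problem instance, step size, and penalty value c are illustrative assumptions; here the penalty parameter is held fixed rather than adjusted by a control law.

```python
import numpy as np

# Illustrative problem:  min f(x) = x1^2 + x2^2   s.t.  h(x) = x1 + x2 - 1 = 0.
# The KKT pair is x* = (0.5, 0.5), lambda* = -1.

def grad_f(x):
    return 2.0 * x                      # gradient of f(x) = x1^2 + x2^2

def h(x):
    return x[0] + x[1] - 1.0            # equality constraint residual

def grad_h(x):
    return np.array([1.0, 1.0])         # gradient of h

def simulate(c=5.0, dt=0.01, steps=5000):
    """Euler-integrate the Lagrange-type network dynamics:
       x'      = -grad_x L_c(x, lam)   (descent on the augmented Lagrangian)
       lambda' =  h(x)                 (ascent on the multiplier)
    where L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2."""
    x = np.zeros(2)
    lam = 0.0
    for _ in range(steps):
        dx = -(grad_f(x) + lam * grad_h(x) + c * h(x) * grad_h(x))
        dlam = h(x)
        x = x + dt * dx
        lam = lam + dt * dlam
    return x, lam

x_star, lam_star = simulate()
print(x_star, lam_star)   # should be near x = (0.5, 0.5), lambda = -1
```

For this convex instance the saddle-point flow converges with a fixed penalty; the point of the paper's approach, as stated in the abstract, is that treating c as a control variable and synthesizing its adjustment law via a control-Lyapunov function can stabilize the network for a much broader class of problems.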