In this paper we present augmented Lagrangians for nonconvex minimization problems with equality constraints. We construct a dual problem with respect to the Lagrangian presented here, give saddle point optimality conditions, and obtain strong duality results. Using these results, we modify the subgradient and cutting plane methods for solving the constructed dual problem. The algorithms proposed in this paper have several advantages. We do not use any convexity or differentiability conditions, and we show that the dual problem is always concave regardless of the properties of the primal problem. The subgradient of the dual function along which its value increases is calculated without solving any additional problem. In contrast with penalty or multiplier methods, the new methods improve the value of the dual function without driving the 'penalty-like' parameter to infinity. In both methods the value of the dual function strictly increases at each iteration. In addition, by using the primal-dual gap, the proposed algorithms possess a natural stopping criterion. A convergence theorem for the subgradient method is also presented.
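To make the dual scheme concrete, the following is a minimal sketch of a modified subgradient iteration on the sharp augmented Lagrangian L(x, u, c) = f(x) + c|h(x)| - u h(x). The one-dimensional test problem, the constant step sizes s and eps, and the grid-based inner minimization over a compact set are illustrative assumptions, not details taken from the paper; note that the penalty-like parameter c stays finite and the iteration stops as soon as a feasible minimizer is found (zero primal-dual gap).

```python
import math

# Illustrative (assumed) nonconvex test problem:
#   minimize f(x) = sin(3x) + 0.1 x^2   subject to h(x) = x - 1 = 0,  x in [-2, 2]
def f(x):
    return math.sin(3.0 * x) + 0.1 * x * x

def h(x):
    return x - 1.0

# Compact feasible box X, searched exhaustively (a stand-in for a global
# minimization oracle for the inner problem).
GRID = [-2.0 + 0.01 * i for i in range(401)]

def sharp_lagrangian(x, u, c):
    # Sharp augmented Lagrangian: f(x) + c*||h(x)|| - u*h(x)
    return f(x) + c * abs(h(x)) - u * h(x)

def modified_subgradient(s=1.0, eps=1.0, tol=1e-8, max_iter=50):
    u, c = 0.0, 0.0
    x = GRID[0]
    for _ in range(max_iter):
        # Inner step: global minimizer of L(., u, c) over the compact set X.
        x = min(GRID, key=lambda y: sharp_lagrangian(y, u, c))
        if abs(h(x)) <= tol:
            # x is feasible, so the primal-dual gap is zero: stop.
            break
        # (-h(x), ||h(x)||) is a subgradient of the dual function at (u, c),
        # obtained for free from the inner minimizer.
        u -= s * h(x)
        c += (s + eps) * abs(h(x))  # penalty-like parameter stays finite
    return x, u, c
```

On this example the iteration terminates at the feasible point x = 1 after a handful of dual updates, without c being driven to infinity; the feasibility test |h(x)| <= tol plays the role of the natural primal-dual-gap stopping criterion mentioned above.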