Convex relaxations can be used to obtain lower bounds on the optimal objective function value of nonconvex quadratically constrained quadratic programs. For some problems, however, significantly better bounds can be obtained by minimizing the restricted Lagrangian function for a given estimate of the Lagrange multipliers. The difficulty in using Lagrangian duality within a global optimization context is that the restricted Lagrangian is often itself nonconvex. Minimizing a convex underestimate of the restricted Lagrangian overcomes this difficulty and makes Lagrangian duality usable within a global optimization framework. A branch-and-bound algorithm is presented that relies on these Lagrangian underestimates to provide lower bounds and on the interval Newton method to facilitate convergence in the neighborhood of the global solution. Computational results show that the algorithm compares favorably to the Reformulation–Linearization Technique on problems with suitable structure.
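To make the bounding idea concrete, the sketch below is a minimal illustration, not the paper's algorithm: the toy one-variable problem, the fixed multiplier estimate `lam`, the secant underestimator of the concave term, and the tolerances are all assumptions introduced here, and the interval Newton acceleration is omitted. It shows how a convex (here, linear) underestimate of the restricted Lagrangian yields box lower bounds inside a branch-and-bound loop.

```python
# Illustrative sketch only: the problem data, multiplier estimate, and
# underestimator below are assumptions, not taken from the paper.
#
# Toy problem in the QCQP spirit:
#   minimize  f(x) = -x^2 + x   over the box [0, 2]
#   subject to g(x) = x - 1.5 <= 0
# Global minimum: x* = 1.5 with f(x*) = -0.75.

def f(x):
    return -x * x + x            # nonconvex (concave) objective

def g(x):
    return x - 1.5               # constraint g(x) <= 0

def lagrangian_lower_bound(l, u, lam):
    """Lower-bound min_{x in [l,u], g(x)<=0} f(x) for lam >= 0.

    The restricted Lagrangian f(x) + lam*g(x) is nonconvex, so its
    concave term -x^2 is replaced by its secant -(l+u)*x + l*u, which
    underestimates -x^2 on [l, u]. The resulting underestimate is
    linear, so its box minimum sits at an endpoint; by weak duality
    that value bounds the constrained minimum over the box from below."""
    def under(x):
        return (-(l + u) * x + l * u) + x + lam * g(x)
    return min(under(l), under(u))

def branch_and_bound(lam=1.0, width_tol=1e-3):
    best_x, best_f = None, float("inf")
    boxes = [(0.0, 2.0)]
    while boxes:
        l, u = boxes.pop()
        # Improve the incumbent with feasible sample points of the box.
        for x in (l, 0.5 * (l + u), u):
            if g(x) <= 0.0 and f(x) < best_f:
                best_x, best_f = x, f(x)
        # Fathom by bound, or stop refining once the box is tiny.
        if lagrangian_lower_bound(l, u, lam) >= best_f or u - l < width_tol:
            continue
        mid = 0.5 * (l + u)
        boxes += [(l, mid), (mid, u)]    # bisect and recurse
    return best_x, best_f

print(branch_and_bound())                # -> (1.5, -0.75)
```

Because of the nonconvexity there is a duality gap at the root box, so the Lagrangian bound alone cannot fathom everything; branching tightens the secant underestimate as boxes shrink, which is the role the paper assigns to branch-and-bound (with the interval Newton method sharpening convergence near the solution).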