We consider two variable target value frameworks for solving large-scale nondifferentiable optimization problems. We provide convergence analyses for various combinations of these variable target value frameworks with several direction-finding and step-length strategies, including the pure subgradient method, the volume algorithm, the average direction strategy, and a generalized Polyak-Kelley cutting plane method. In addition, we suggest a further enhancement via a projected quadratic-fit line-search whenever any of these algorithmic procedures achieves an improvement in the objective value. Extensive computational results on different classes of problems reveal that these modifications and enhancements significantly improve the effectiveness of the algorithms in solving Lagrangian duals of linear programs, even yielding a favorable comparison against the commercial software CPLEX 8.1.
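To illustrate the core idea, the following is a minimal sketch of a pure subgradient method driven by a variable target value. It is not the paper's specific framework: the toy piecewise-linear objective, the Polyak-type step toward a target `w = f_best - delta`, and the simple stall-based rule for halving the target gap `delta` are all illustrative assumptions standing in for the more elaborate target-adjustment and direction-finding strategies the paper analyzes.

```python
import numpy as np

# Hypothetical toy instance: piecewise-linear convex f(x) = max_i (A[i] @ x + b[i]);
# its minimum value is 5/19, attained at x = (11/19, -3/19).
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.5, -3.0]])
b = np.array([0.0, 1.0, -0.5])

def f(x):
    return float(np.max(A @ x + b))

def subgrad(x):
    # A subgradient of f at x: the gradient of an active affine piece.
    return A[np.argmax(A @ x + b)]

def vtv_subgradient(x0, iters=5000, delta=1.0, patience=20):
    """Pure subgradient method with a simplified variable-target-value rule:
    take a Polyak-type step aimed at the target w = f_best - delta, and
    halve the target gap delta whenever progress stalls."""
    x, f_best, stall = np.asarray(x0, dtype=float), f(x0), 0
    for _ in range(iters):
        fx, g = f(x), subgrad(x)
        w = f_best - delta                  # current target value
        x = x - ((fx - w) / (g @ g)) * g    # Polyak step toward the target
        if f(x) < f_best - 0.1 * delta:     # sufficient descent achieved
            stall = 0
        else:
            stall += 1
            if stall >= patience:           # target too ambitious: shrink the gap
                delta *= 0.5
                stall = 0
        f_best = min(f_best, f(x))
    return f_best

best_value = vtv_subgradient(np.zeros(2))
```

The target value plays the role of an estimate of the optimal objective: if the target is reached quickly the gap can be kept (or tightened), while repeated failures signal that the target lies below the optimum and the gap should shrink, which is the basic tension that variable target value schemes manage.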