In this paper, we present variants of Shor and Zhurbenko's r-algorithm, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods. This well-known r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, is recognized as one of the most effective procedures for solving nondifferentiable optimization problems. However, the method needs to store the space dilation matrix and update it at every iteration, resulting in a substantial computational burden for large-scale problems. To circumvent this difficulty, we first propose a memoryless update scheme, which, under a suitable choice of parameters, yields a direction of motion that turns out to be a convex combination of two successive anti-subgradients. Moreover, in the space transformation sense, the new update scheme can be viewed as a combination of space dilation and reduction operations. We prove convergence of this new method, and demonstrate how it can be used in conjunction with a variable target value method that yields a practical, convergent implementation. We also examine a memoryless variant that uses a fixed dilation parameter instead of varying degrees of dilation and/or reduction as in the former algorithm, as well as another variant based on a two-step limited memory update. These variants are tested along with Shor's r-algorithm and a modified version of a related algorithm due to Polyak that employs a projection onto a pair of Kelley's cutting planes. We use a set of standard test problems from the literature as well as randomly generated dual transportation and assignment problems in our computational experiments.
The results indicate that the proposed space dilation and reduction method and the modified Polyak method are competitive with each other, and that both offer a substantial advantage over the r-algorithm and over the other tested limited memory variants with respect to accuracy as well as computational effort.
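To illustrate the memoryless direction rule described above, the following is a minimal sketch of a subgradient method whose search direction is a convex combination of the two most recent anti-subgradients. The test function (the l1 norm), the fixed combination weight `lam`, and the diminishing step schedule are all illustrative choices, not taken from the paper; the actual algorithm derives its weights from the space dilation parameters and uses a variable target value strategy for step sizes.

```python
import numpy as np

def f(x):
    # Simple nondifferentiable convex test function: f(x) = ||x||_1
    return float(np.sum(np.abs(x)))

def subgrad(x):
    # One valid subgradient of the l1 norm (sign vector; 0 where x_i = 0)
    return np.sign(x)

def memoryless_subgradient(x0, lam=0.5, step0=1.0, iters=500):
    """Sketch: move along a convex combination of the two most recent
    anti-subgradients, -(lam*g_k + (1-lam)*g_{k-1}), with a normalized
    direction and a diminishing step size step0/k (hypothetical choices)."""
    x = np.asarray(x0, dtype=float)
    g_prev = subgrad(x)
    for k in range(1, iters + 1):
        g = subgrad(x)
        d = -(lam * g + (1.0 - lam) * g_prev)  # convex combination of anti-subgradients
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            break  # successive subgradients cancel: iterate is near a minimizer of ||.||_1
        x = x + (step0 / k) * d / norm
        g_prev = g
    return x

x_star = memoryless_subgradient([3.0, -2.0])
print(f(x_star))  # should be close to the optimal value 0 for this test function
```

Note that with `lam = 1.0` this reduces to the plain subgradient method; intermediate weights blend in the previous anti-subgradient, which is the memoryless analogue of the averaging effect that the full dilation matrix produces in the r-algorithm.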