Finite termination of the proximal point algorithm
Mathematical Programming: Series A and B
Proximal minimization algorithm with D-functions
Journal of Optimization Theory and Applications
Entropic proximal mappings with applications to nonlinear programming
Mathematics of Operations Research
Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming
Mathematics of Operations Research
Weak sharp minima in mathematical programming
SIAM Journal on Control and Optimization
On the convergence of the exponential multiplier method for convex programming
Mathematical Programming: Series A and B
A class of smoothing functions for nonlinear and mixed complementarity problems
Computational Optimization and Applications
On the twice differentiable cubic augmented Lagrangian
Journal of Optimization Theory and Applications
Approximate iterations in Bregman-function-based proximal algorithms
Mathematical Programming: Series A and B
A logarithmic-quadratic proximal method for variational inequalities
Computational Optimization and Applications
Interior proximal and multiplier methods based on second order homogeneous kernels
Mathematics of Operations Research
Lagrangian duality and related multiplier methods for variational inequality problems
SIAM Journal on Optimization
Rescaling and stepsize selection in proximal methods using separable generalized distances
SIAM Journal on Optimization
Inexact proximal point methods for variational inequality problems
SIAM Journal on Optimization
We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded "box" in $${{\mathbb R}^n}$$, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a class previously proposed by Auslender et al. Using these results, we derive a broadened class of augmented Lagrangian methods. We point out some connections between these methods and earlier work on "pure penalty" smoothing methods for complementarity problems; this connection leads to a new form of augmented Lagrangian based on the "neural" smoothing function. Finally, we computationally compare this new kind of augmented Lagrangian with three previously known varieties on the MCPLIB problem library, and show that the neural approach offers some advantages. In these tests, we also consider primal-dual approaches that include a primal proximal term. Such a stabilizing term tends to slow down the algorithms, but makes them more robust.
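As a minimal sketch (not taken from the paper), the "neural" smoothing function referred to above is commonly identified with the Chen–Mangasarian neural-network smoothing of the plus function: $p(t,\mu) = t + \mu\log(1 + e^{-t/\mu})$, which is smooth for $\mu > 0$ and converges to $\max(t,0)$ as $\mu \downarrow 0$. The snippet below illustrates this behavior; the function names are ours, chosen for illustration.

```python
import math

def plus(t):
    """Exact plus function max(t, 0), the kink being smoothed."""
    return max(t, 0.0)

def neural_plus(t, mu):
    """Chen-Mangasarian 'neural' smoothing of max(t, 0):
        p(t, mu) = t + mu * log(1 + exp(-t/mu)),
    evaluated in a numerically stable form via log1p.
    For mu -> 0+, p(t, mu) -> max(t, 0); for mu > 0 it is C-infinity
    and lies strictly above the plus function.
    """
    x = -t / mu
    if x > 0:
        # log(1 + e^x) = x + log(1 + e^(-x)) avoids overflow for large x
        return t + mu * (x + math.log1p(math.exp(-x)))
    return t + mu * math.log1p(math.exp(x))

# The smoothing error is uniformly bounded: 0 < p(t, mu) - max(t, 0) <= mu * log 2,
# with the maximum gap attained at the kink t = 0.
```

Driving $\mu$ to zero in such a smoothing recovers the complementarity conditions, which is the sense in which "pure penalty" smoothing methods and augmented Lagrangian methods can be connected.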