We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set, so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator, we prove that the sequence converges weakly if and only if the problem has solutions, in which case the weak limit is a solution; if the problem has no solutions, the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances, which dealt only with the finite-dimensional case and applied only to convex optimization problems or to finding zeroes of monotone operators, both of which are particular cases of variational inequality problems.
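To make the iteration concrete, here is a minimal finite-dimensional sketch (not the paper's Hilbert-space setting) for the variational inequality over the nonnegative orthant, using the entropy kernel h(x) = Σ x_i log x_i as the Bregman function and the hypothetical operator F(x) = x − a. Each proximal step finds x > 0 with F(x) + λ(∇h(x) − ∇h(x_prev)) = 0; here this decouples into increasing scalar equations, solved by bisection, so the iterates stay strictly interior while approaching the (possibly boundary) solution max(a, 0). The names `bregman_prox_step` and `solve_vi` are ours.

```python
import math

def bregman_prox_step(x_prev, a, lam, tol=1e-12):
    """One Bregman proximal step for F(x) = x - a on the nonnegative orthant.

    With the entropy kernel h(x) = sum_i x_i log x_i, the subproblem
    F(x) + lam*(grad h(x) - grad h(x_prev)) = 0 decouples per coordinate:
    solve t - a_i + lam*(log t - log x_prev_i) = 0 over t > 0.
    The left-hand side is strictly increasing, so bisection suffices.
    """
    x_new = []
    for xk, ai in zip(x_prev, a):
        g = lambda t: t - ai + lam * (math.log(t) - math.log(xk))
        lo, hi = 1e-300, max(xk, ai, 1.0)
        while g(hi) < 0:              # grow the upper bracket until g(hi) >= 0
            hi *= 2.0
        for _ in range(200):          # bisection keeps g(lo) < 0 <= g(hi)
            mid = 0.5 * (lo + hi)
            if g(mid) < 0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        x_new.append(0.5 * (lo + hi))  # strictly positive: an interior point
    return x_new

def solve_vi(a, x0, lam=1.0, iters=30):
    """Run the Bregman proximal point iteration from an interior start x0 > 0."""
    x = list(x0)
    for _ in range(iters):
        x = bregman_prox_step(x, a, lam)
    return x
```

For a = (2, −1) the VI solution is (2, 0): the second coordinate lies on the boundary, yet every iterate of `solve_vi([2.0, -1.0], [1.0, 1.0])` remains strictly positive, illustrating the penalization role of the Bregman distance described above.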