Interior point methods for equilibrium problems
Computational Optimization and Applications
In this article we present a new and efficient method for solving equilibrium problems on polyhedra. The method is based on an interior-quadratic proximal term that replaces the usual quadratic proximal term, which leads to an interior proximal type algorithm. Each iteration consists of a prediction step followed by a correction step, as in the extragradient method. In the first algorithm each of these steps is obtained by solving an unconstrained minimization problem, while in the second algorithm the correction step is replaced by an Armijo backtracking linesearch followed by a hyperplane projection step. We prove that our algorithms are convergent under mild assumptions: pseudomonotonicity for both algorithms, and a Lipschitz property for the first one. Finally, we present numerical experiments to illustrate the behavior of the proposed algorithms.
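To make the prediction/correction structure concrete, the following is a minimal Python sketch of the classical extragradient scheme for an equilibrium problem: find x* in C such that f(x*, y) >= 0 for all y in C. It keeps the usual Euclidean quadratic proximal term and a generic constrained solver, rather than the interior-quadratic term described in the abstract (which is what makes the paper's subproblems unconstrained); the bifunction, the polyhedron, the step size lam, and the tolerance are all illustrative assumptions, not the authors' implementation or test problems.

```python
# Sketch of the prediction/correction (extragradient) iteration for an
# equilibrium problem EP(f, C), using the ordinary Euclidean proximal term.
# All problem data below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def extragradient_ep(f, x0, constraints, lam=0.1, tol=1e-6, max_iter=200):
    """Find x* in C with f(x*, y) >= 0 for all y in C (monotone case)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Prediction step: y_k = argmin_{y in C} lam*f(x_k, y) + 0.5*||y - x_k||^2
        pred = minimize(lambda y: lam * f(x, y) + 0.5 * np.dot(y - x, y - x),
                        x, constraints=constraints)
        y = pred.x
        # Correction step: x_{k+1} = argmin_{z in C} lam*f(y_k, z) + 0.5*||z - x_k||^2
        corr = minimize(lambda z: lam * f(y, z) + 0.5 * np.dot(z - x, z - x),
                        x, constraints=constraints)
        x_new = corr.x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: f(x, y) = (y - x).(A x + b) is the bifunction of the
# variational inequality with affine operator A x + b over the box [0, 1]^2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -1.0])
f = lambda x, y: (y - x) @ (A @ x + b)
box = LinearConstraint(np.eye(2), 0.0, 1.0)
print(extragradient_ep(f, np.array([0.5, 0.5]), [box]))  # approx. [0.4, 0.2]
```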