Nemirovski's analysis (SIAM J. Optim. 15:229–251, 2005) shows that the extragradient method achieves an O(1/t) convergence rate for variational inequalities with Lipschitz continuous monotone operators. For the same class of problems, a family of Fejér monotone projection and contraction methods has been developed over the last decades. Until now, only convergence results were available for these projection and contraction methods, although numerical experiments indicate that they consistently outperform the extragradient method; the advantage stems from the `optimal' step size chosen in the contraction sense. In this paper, we prove the O(1/t) convergence rate under a unified conceptual framework that includes the projection and contraction methods as special cases, thereby completing the theory of the existing projection and contraction methods. Preliminary numerical results demonstrate that the projection and contraction methods converge about twice as fast as the extragradient method.
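The two schemes the abstract compares can be sketched on a toy monotone variational inequality. The sketch below, which is illustrative and not taken from the paper, uses the nonnegative orthant as the feasible set C, an affine monotone operator F(x) = Ax + b, the standard extragradient update of Korpelevich, and a projection and contraction step with the `optimal' step length alpha = e'd / d'd in the contraction sense; the matrix A, vector b, step sizes, and the relaxation factor gamma are hypothetical choices for the demonstration.

```python
import numpy as np

def proj(x):
    """Projection onto the nonnegative orthant C = {x : x >= 0}."""
    return np.maximum(x, 0.0)

# Toy affine monotone operator F(x) = A x + b (symmetric part of A is
# positive definite, so F is strongly monotone and Lipschitz with L = ||A||_2).
A = np.array([[4.0, -1.0],
              [1.0,  2.0]])
b = np.array([-1.0, -2.0])
F = lambda x: A @ x + b
L = np.linalg.norm(A, 2)          # spectral norm = Lipschitz constant

def extragradient(x0, tau, tol=1e-10, max_iter=100000):
    """Korpelevich's extragradient method: predictor-corrector with two
    projections per iteration and fixed step tau < 1/L."""
    x = x0.copy()
    for k in range(max_iter):
        y = proj(x - tau * F(x))          # predictor
        if np.linalg.norm(x - y) < tol:   # residual ||x - P_C(x - tau F(x))||
            return x, k
        x = proj(x - tau * F(y))          # corrector
    return x, max_iter

def projection_contraction(x0, beta, gamma=1.9, tol=1e-10, max_iter=100000):
    """A projection and contraction step: same predictor, but the corrector
    moves along d = e - beta (F(x) - F(x~)) with the 'optimal' step length
    alpha = <e, d> / ||d||^2 in the contraction sense, relaxed by gamma in (0, 2)."""
    x = x0.copy()
    for k in range(max_iter):
        xt = proj(x - beta * F(x))        # predictor x~
        e = x - xt                        # projection residual e(x, beta)
        if np.linalg.norm(e) < tol:
            return x, k
        d = e - beta * (F(x) - F(xt))     # profitable direction
        alpha = (e @ d) / (d @ d)         # 'optimal' contraction step size
        x = x - gamma * alpha * d
    return x, max_iter

x0 = np.zeros(2)
x_eg, it_eg = extragradient(x0, tau=0.9 / L)
x_pc, it_pc = projection_contraction(x0, beta=0.9 / L)
```

For this example the solution is interior (the unique root of Ax + b = 0, namely x* = (4/9, 7/9)), so both methods converge to the same point; comparing `it_eg` and `it_pc` reproduces, on a small scale, the kind of iteration-count comparison the numerical experiments report.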