Interior gradient (subgradient) and proximal methods for convex constrained minimization have been studied extensively, in particular for optimization problems over the nonnegative orthant. These methods use non-Euclidean projections and proximal distance functions to exploit the geometry of the constraints. In this paper, we identify a simple mechanism that allows us to derive global convergence of the iterates produced, as well as improved global rate-of-convergence estimates, for a wide class of such methods and under more general convex constraints. Our results are illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems. In particular, we derive a class of interior gradient algorithms that exhibits an $O(k^{-2})$ global convergence rate estimate.
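For concreteness, a prototypical interior gradient step of the kind discussed above can be sketched as follows; the objective $f$, constraint set $C$, step sizes $t_k$, and the particular entropy kernel are illustrative assumptions, not the paper's exact scheme. Given a proximal distance $D(\cdot,\cdot)$ adapted to $C$, the update is
\[
x^{k+1} = \operatorname*{argmin}_{x \in C} \left\{ \langle \nabla f(x^k), x \rangle + \frac{1}{t_k} D(x, x^k) \right\},
\]
and for the entropy kernel $D(x,y) = \sum_j \big( x_j \log(x_j/y_j) + y_j - x_j \big)$ on the nonnegative orthant this minimization admits the explicit closed form
\[
x_j^{k+1} = x_j^k \exp\!\left( -t_k \, \nabla_j f(x^k) \right), \qquad j = 1, \dots, n,
\]
so the iterates remain strictly positive without any explicit projection, which is precisely the appeal of non-Euclidean proximal distances on the orthant.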