SODA '07 Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2
Rounding of convex sets and efficient gradient methods for linear programming problems
Optimization Methods & Software
Unconstrained Convex Minimization in Relative Scale
Mathematics of Operations Research
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 1
First-order algorithm with O(ln(1/ε)) convergence for ε-equilibrium in two-person zero-sum games
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 1
Gradient-based algorithms for finding Nash equilibria in extensive form games
WINE'07 Proceedings of the 3rd international conference on Internet and network economics
Smoothing Techniques for Computing Nash Equilibria of Sequential Games
Mathematics of Operations Research
Speeding up gradient-based algorithms for sequential games
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Artificial Intelligence
Accelerated training of max-margin Markov networks with kernels
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Trace Norm Regularization: Reformulations, Algorithms, and Multi-Task Learning
SIAM Journal on Optimization
Applying Metric Regularity to Compute a Condition Measure of a Smoothing Algorithm for Matrix Games
SIAM Journal on Optimization
IPCO'05 Proceedings of the 11th international conference on Integer Programming and Combinatorial Optimization
Near-optimal no-regret algorithms for zero-sum games
Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms
New approximation algorithms for minimum enclosing convex shapes
Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms
Case-based strategies in computer poker
AI Communications
Computational Optimization and Applications
Accelerated training of max-margin Markov networks with kernels
Theoretical Computer Science
Algorithms and hardness results for parallel large margin learning
The Journal of Machine Learning Research
A linearly convergent first-order algorithm for total variation minimisation in image processing
International Journal of Bioinformatics Research and Applications
In this paper we introduce a new primal-dual technique for the convergence analysis of gradient schemes for nonsmooth convex optimization. As a first application, we derive a primal-dual gradient method for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count. As a second, we obtain a gradient scheme that minimizes a nonsmooth strongly convex function with known structure at a rate of O(1/k²). In both cases the efficiency of the methods exceeds the corresponding black-box lower complexity bounds by an order of magnitude.
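The general idea behind such structured-nonsmooth schemes can be illustrated with a minimal smoothing sketch. The code below is not the paper's primal-dual excessive-gap method; it is a generic Nesterov-style approach, in which the nonsmooth objective ||Ax − b||₁ is replaced by a Huber-smoothed surrogate and minimized with an accelerated gradient method. The function names and the random test problem are invented for illustration.

```python
import numpy as np

def huber_grad(r, mu):
    # Elementwise gradient of the Huber smoothing of |t|:
    # psi_mu(t) = t^2/(2*mu) if |t| <= mu, else |t| - mu/2.
    return np.clip(r / mu, -1.0, 1.0)

def smoothed_gradient_method(A, b, mu, steps):
    """Accelerated gradient descent on the smooth surrogate
    f_mu(x) = sum_i psi_mu((Ax - b)_i) of ||Ax - b||_1.
    The gradient of f_mu is Lipschitz with constant ||A||^2 / mu,
    which determines the step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2 / mu
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        g = A.T @ huber_grad(A @ y - b, mu)
        x_new = y - g / L                      # gradient step at the extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ rng.standard_normal(5)   # consistent system, so the optimum is 0
x_hat = smoothed_gradient_method(A, b, mu=1e-3, steps=2000)
print(np.sum(np.abs(A @ x_hat - b)))  # residual close to 0
```

Choosing the smoothing parameter mu of order ε balances the smoothing error (proportional to mu) against the accelerated rate L/k² ∝ 1/(mu·k²), which is how smoothing schemes reach O(1/k) overall accuracy on nonsmooth problems instead of the black-box O(1/√k).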