In this paper we present a new approach to constructing subgradient schemes for various types of nonsmooth problems with convex structure. Our methods are primal-dual in the sense that they always generate a feasible approximation to the optimum of an appropriately formulated dual problem. Among other advantages, this feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent-series methods, mirror descent methods) by the presence of two control sequences. The first sequence aggregates the support functions in the dual space, and the second establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility makes it possible to guarantee boundedness of the sequence of primal test points even when the feasible set is unbounded (we do, however, always assume that the subgradients are uniformly bounded). We present variants of the subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all these settings our methods are proved to be optimal with respect to worst-case black-box lower complexity bounds.
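To make the two control sequences concrete, the following is a minimal sketch (not the paper's exact pseudocode) of a simple dual-averaging scheme of the kind the abstract describes, specialized to the Euclidean prox-function d(x) = ||x - x0||^2 / 2 on R^n. The running sum of subgradients plays the role of the first (dual aggregation) sequence, and the scale beta_k = gamma * sqrt(k), a standard choice assumed here, plays the role of the second; the oracle bound on subgradients is also assumed.

    import numpy as np

    def dual_averaging(subgrad, x0, n_iters=1000, gamma=1.0):
        """Minimize a nonsmooth convex f given an oracle subgrad(x) returning some g in df(x)."""
        x = x0.copy()
        z = np.zeros_like(x0)          # first control sequence: aggregated subgradients (dual space)
        x_avg = np.zeros_like(x0)      # running average of primal test points
        for k in range(1, n_iters + 1):
            z += subgrad(x)            # aggregate the support information in the dual space
            beta = gamma * np.sqrt(k)  # second control sequence: primal-dual scale
            # Euclidean prox step: x_{k+1} = argmin_x { <z, x> + beta * ||x - x0||^2 / 2 }
            x = x0 - z / beta
            x_avg += (x - x_avg) / k
        return x_avg

    # Toy problem (hypothetical, for illustration): f(x) = ||x - c||_1, minimized at c.
    c = np.array([1.0, -2.0, 0.5])
    f_subgrad = lambda x: np.sign(x - c)   # a valid subgradient of the l1 distance
    x_star = dual_averaging(f_subgrad, x0=np.zeros(3), n_iters=20000, gamma=0.5)
    print(x_star)                          # approaches c at the optimal O(1/sqrt(k)) rate

Note that the primal iterates here stay bounded even though the feasible set is all of R^n: since ||z|| grows at most linearly in k while beta grows like sqrt(k) times gamma, the step x = x0 - z / beta remains within a ball around x0 whose radius depends only on the subgradient bound and gamma, illustrating the boundedness property claimed in the abstract.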