The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. The method exhibits an efficiency estimate that is only mildly dependent on the dimension of the decision variables, and it is thus suitable for solving very large-scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient method, obtained by replacing the usual Euclidean squared distance with a general distance-like function. Within this interpretation, we derive convergence and efficiency estimates in a simple way. We then propose an entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be only mildly dependent on the dimension of the problem.
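For the entropic variant over the unit simplex, the mirror (prox) step induced by the entropy distance has a closed form: a multiplicative, exponentiated-gradient reweighting followed by normalization, so no explicit projection onto the simplex is required. The sketch below illustrates this update in Python; the function name, the 1/sqrt(k) diminishing step-size schedule, and the shift used for numerical stability are illustrative choices and not the paper's exact tuning.

```python
import numpy as np

def entropic_mirror_descent(subgrad, n, num_iters, c0=1.0):
    """Sketch of entropic mirror descent over the unit simplex.

    subgrad(x): returns a subgradient of the objective at x.
    n:          dimension of the simplex.
    c0:         step-size constant; t_k = c0 / sqrt(k) is one common
                diminishing schedule (an illustrative assumption).
    """
    x = np.full(n, 1.0 / n)                 # start at the simplex center
    for k in range(1, num_iters + 1):
        g = subgrad(x)
        t = c0 / np.sqrt(k)
        # Multiplicative update; subtracting g.min() keeps exponents <= 0
        # to avoid overflow, and the constant factor cancels on normalization.
        w = x * np.exp(-t * (g - g.min()))
        x = w / w.sum()                     # iterate stays in the simplex
    return x

# Example: minimize the linear function f(x) = c @ x over the simplex;
# the minimizer is the vertex e_j with j = argmin_i c_i.
c = np.array([0.9, 0.5, 0.7])
x_star = entropic_mirror_descent(lambda x: c, n=3, num_iters=2000)
print(x_star)  # mass concentrates on coordinate 1, where c is smallest
```

Because the entropy distance is strongly convex with respect to the l1 norm, the resulting efficiency estimate grows only like sqrt(ln n) in the dimension, which is the mild dimension dependence the abstract refers to.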