We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.

In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.
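As a concrete illustration of the iteration described above, the sketch below (Python with NumPy, not code from the paper) applies the incremental update to a toy l1-regression objective f(x) = sum_i |a_i . x - b_i|. The helper name incremental_subgradient, the constant stepsize, and the problem data are assumptions made for this example only; the paper also analyzes diminishing and other stepsize rules. The randomize flag switches between cyclic processing of the components and the randomized order whose improved convergence rate the abstract highlights.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, step=1e-3, epochs=200,
                            randomize=True, seed=0):
    """Minimize f(x) = sum_i f_i(x) by stepping along a subgradient of one
    component f_i at a time, adjusting x after every component (sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = len(subgrads)
    for _ in range(epochs):
        # Process components cyclically, or in a fresh random order each pass.
        order = rng.permutation(m) if randomize else range(m)
        for i in order:
            x -= step * subgrads[i](x)  # intermediate adjustment after component i
    return x

# Toy l1-regression instance: a subgradient of the i-th (nondifferentiable)
# component |a_i . x - b_i| is sign(a_i . x - b_i) * a_i.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
b = A @ rng.normal(size=5)
subgrads = [lambda x, a=a, bi=bi: np.sign(a @ x - bi) * a for a, bi in zip(A, b)]

x_hat = incremental_subgradient(subgrads, x0=np.zeros(5))
print("l1 residual:", np.sum(np.abs(A @ x_hat - b)))
```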