We present an incremental aggregated gradient method for minimizing a sum of continuously differentiable functions. The method requires only a single gradient evaluation per iteration and uses a constant step size. When the gradient is bounded and Lipschitz continuous, we show that the method visits infinitely often regions in which the gradient is small. Under certain unimodality assumptions, we establish global convergence, and in the quadratic case we prove a global linear rate of convergence. The method is applied to distributed optimization problems arising in wireless sensor networks, and numerical experiments compare it with other incremental gradient methods.
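The iteration described in the abstract can be sketched as follows: keep a table holding the most recently computed gradient of each component, refresh a single entry per iteration, and step along the aggregate with a constant step size. This is a minimal illustration, not the paper's exact formulation — the function names, the cyclic component order, the table initialization, and the least-squares toy problem below are all assumptions made for the example.

```python
import numpy as np

def iag(grads, x0, step, n_iter):
    """Sketch of an incremental aggregated gradient iteration.

    grads  : list of per-component gradient functions (one per f_i)
    x0     : starting point
    step   : constant step size
    n_iter : number of iterations (one gradient evaluation each)
    """
    m = len(grads)
    x = np.asarray(x0, dtype=float)
    # Gradient table: the most recently computed gradient of each
    # component; initialized here with one full pass at x0 (other
    # initializations are possible).
    table = [g(x) for g in grads]
    agg = np.sum(table, axis=0)        # running sum of the table
    for k in range(n_iter):
        i = k % m                      # cyclic component selection (an assumption)
        g_new = grads[i](x)            # the single gradient evaluation
        agg += g_new - table[i]        # refresh the aggregate in O(dim)
        table[i] = g_new
        x = x - step * agg / m         # constant-step update
    return x

# Hypothetical toy instance: f_i(x) = 0.5 * (a_i . x - b_i)^2, so the
# sum is a least-squares objective with a known minimizer.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
grads = [lambda x, a=a_i, bi=b_i: (a @ x - bi) * a
         for a_i, b_i in zip(A, b)]
x = iag(grads, np.zeros(3), step=0.01, n_iter=20000)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))  # small: x is near the least-squares solution
```

Because only one stored gradient changes per iteration, the aggregate can be maintained with a single add-and-subtract rather than a full re-summation, which is what makes the cost per iteration comparable to a plain incremental gradient step.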