Stateless distributed gradient descent for positive linear programs

  • Authors:
  • Baruch Awerbuch; Rohit Khandekar

  • Affiliations:
  • Johns Hopkins University, Baltimore, USA; IBM T.J. Watson Research Center, Yorktown Heights, USA

  • Venue:
  • STOC '08: Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing
  • Year:
  • 2008

Abstract

We develop a framework of distributed and stateless solutions for packing and covering linear programs, which are solved by multiple agents operating in a cooperative but uncoordinated manner. Our model has a separate "agent" controlling each variable, and an agent is allowed to read off the current values of only those constraints in which it has non-zero coefficients. This is a natural model for many distributed applications like flow control, maximum bipartite matching, and dominating sets. The most appealing features of our algorithms are their simplicity and polylogarithmic convergence. For the packing LP max{c·x | Ax ≤ b, x ≥ 0}, the algorithm associates a dual variable y_i = exp[(1/ε)(A_i x / b_i − 1)] with each constraint i, and each agent j iteratively increases (resp. decreases) x_j multiplicatively if A_j^T y is too small (resp. large) compared to c_j. Our algorithm, starting from a feasible solution, always maintains feasibility and computes a (1+ε)-approximation in poly(ln(mn A_max)/ε) rounds. Here m and n are the numbers of rows and columns of A, and A_max, also known as the "width" of the LP, is the ratio of the maximum and minimum non-zero entries A_ij/(b_i c_j). A similar algorithm works for the covering LP min{b·y | A^T y ≥ c, y ≥ 0} as well. While exponential dual variables have been used in several packing/covering LP algorithms before [25, 9, 13, 12, 26, 16], this is the first algorithm that is both stateless and has polylogarithmic convergence. Our algorithms can be thought of as applying distributed gradient descent/ascent on a carefully chosen potential. Our analysis differs from those of previous multiplicative-update-based algorithms: it argues that, while the current solution is far from optimality, the potential function decreases/increases by a significant factor.
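
To make the update rule in the abstract concrete, the following is a minimal Python sketch of one synchronous round of the multiplicative update for the packing LP. It is an illustration under our own assumptions, not the authors' implementation: the step size, the slack thresholds, and the toy instance are hypothetical, and the paper's feasibility-preserving details and convergence analysis are not reproduced.

    import numpy as np

    # Sketch (assumed parameters) of one round of the multiplicative update
    # described in the abstract, for the packing LP  max{ c.x : A x <= b, x >= 0 }.
    def packing_round(A, b, c, x, eps=0.1):
        # Exponential dual variable for each constraint i:
        #   y_i = exp[(1/eps) * (A_i x / b_i - 1)]
        y = np.exp((A @ x / b - 1.0) / eps)
        step = 1.0 + eps / 4.0            # assumed multiplicative step size
        for j in range(len(x)):
            # Agent j only needs y_i for the constraints i with A[i, j] != 0.
            price = A[:, j] @ y           # (A_j)^T y
            if price < (1.0 - eps) * c[j]:
                x[j] *= step              # price too small relative to c_j: raise x_j
            elif price > (1.0 + eps) * c[j]:
                x[j] /= step              # price too large relative to c_j: lower x_j
        return x

    # Toy usage: a small hypothetical instance, starting from a feasible point.
    A = np.array([[1.0, 2.0, 1.0],
                  [2.0, 1.0, 3.0]])
    b = np.array([4.0, 6.0])
    c = np.array([1.0, 1.0, 1.0])
    x = np.full(3, 1e-3)
    for _ in range(200):
        x = packing_round(A, b, c, x)

The round is written as a centralized loop only for readability; each iteration of the inner loop uses just the dual values of constraints touching variable j, which is the stateless, local-information model the abstract describes.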