Optimization and analysis of distributed averaging with memory

  • Authors:
  • Boris N. Oreshkin; Mark J. Coates; Michael G. Rabbat

  • Affiliations:
  • Department of Electrical and Computer Engineering, McGill University, Montréal, Québec, Canada (all authors)

  • Venue:
  • Allerton '09: Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing
  • Year:
  • 2009

Abstract

This paper analyzes the rate of convergence of a distributed averaging scheme that makes use of memory at each node. In conventional distributed averaging, each node computes an update based on its current state and the current states of its neighbours. Previous work observed that the trajectory at each node converges smoothly and demonstrated via simulation that a predictive framework can lead to faster rates of convergence. This paper provides theoretical guarantees for a distributed averaging algorithm with memory. We analyze a scheme in which updates are computed as a convex combination of two terms: (i) the usual update using only current states, and (ii) a local linear predictor term that makes use of a node's current and previous states. Although this scheme requires only one additional memory register per node, we prove that it can lead to dramatic improvements in the rate of convergence. For example, on the N-node chain topology our approach leads to a factor-of-N improvement over the standard approach, and on the two-dimensional grid it achieves a factor-of-√N improvement. Our analysis is direct: it relates the eigenvalues of a conventional (memoryless) averaging matrix to the eigenvalues of the averaging matrix implementing the proposed scheme via a standard linearization of the quadratic eigenvalue problem. The success of our approach relies on each node using the optimal parameter for combining the two update terms. We derive a closed-form expression for this optimal parameter as a function of the second largest eigenvalue of a memoryless averaging matrix; this eigenvalue can easily be computed in a decentralized fashion using existing methods, making our approach amenable to practical implementation.
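
The abstract describes each update as a convex combination of the standard neighbourhood average and a local linear-predictor term built from a node's current and previous values, with the mixing parameter chosen as a function of the second largest eigenvalue of the memoryless averaging matrix. The NumPy sketch below illustrates one common two-register recursion of this kind; the specific predictor form, the optimal_beta formula (the optimizer for this particular sketch), and the metropolis_weights helper are assumptions made for the illustration and are not taken verbatim from the paper.

import numpy as np

def metropolis_weights(adjacency):
    # Hypothetical helper: build a symmetric, doubly stochastic averaging
    # matrix W from an adjacency matrix using Metropolis weights.
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def averaging_with_memory(W, x0, beta, num_iters=200):
    # Two-register sketch: mix the memoryless update W @ x with a local
    # linear extrapolation formed from the node's current and previous
    # values, so each node needs only one extra memory register.
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(num_iters):
        y = W @ x                      # conventional memoryless update
        p = 2.0 * y - x_prev           # local linear-predictor term
        x_prev, x = x, beta * p + (1.0 - beta) * y
    return x

def optimal_beta(lambda2):
    # Mixing parameter minimizing the spectral radius of this sketch's
    # two-step recursion, as a function of the second largest eigenvalue
    # of W (assumed form; not the paper's exact closed-form expression).
    return ((1.0 - np.sqrt(1.0 - lambda2 ** 2)) / lambda2) ** 2

# Example: 50-node chain, the topology for which the abstract reports a
# factor-of-N speedup over memoryless averaging.
N = 50
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
W = metropolis_weights(A)
lambda2 = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
x0 = np.random.randn(N)
x = averaging_with_memory(W, x0, optimal_beta(lambda2), num_iters=500)
print(np.max(np.abs(x - x0.mean())))   # deviation from the true average

For the recursion above, each eigenvalue λ of W maps to roots of μ² − (1+β)λμ + β = 0; setting β so that the discriminant vanishes at λ = λ₂ makes every non-consensus mode decay at rate √β, which is how a two-register scheme can turn a spectral gap of order 1/N² (chain) into an effective rate gap of order 1/N.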