Accelerated distributed average consensus via localized node state prediction

  • Authors:
  • Tuncer Can Aysal
  • Boris N. Oreshkin
  • Mark J. Coates

  • Affiliations:
  • Telecommunications and Signal Processing-Computer Networks Laboratory, Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada and Department of Electrical and C ...
  • Telecommunications and Signal Processing-Computer Networks Laboratory, Department of Electrical and Computer Engineering, Montreal, QC, Canada
  • Telecommunications and Signal Processing-Computer Networks Laboratory, Department of Electrical and Computer Engineering, Montreal, QC, Canada

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2009

Abstract

This paper proposes an approach to accelerate local, linear iterative network algorithms that asymptotically achieve distributed average consensus. We focus on the class of algorithms in which each node initializes its "state value" to the local measurement and then, at each iteration of the algorithm, updates this state value by adding a weighted sum of its own and its neighbors' state values. Provided the weight matrix satisfies certain convergence conditions, the state values asymptotically converge to the average of the measurements, but the convergence is generally slow, impeding the practical application of these algorithms. To improve the rate of convergence, we propose a novel method in which each node employs a linear predictor to predict future node values. The local update then becomes a convex (weighted) sum of the original consensus update and the prediction; convergence is faster because redundant states are bypassed. The method is linear and imposes only a small computational burden. For a concrete theoretical analysis, we prove the existence of a convergent solution in the general case, then focus on one-step prediction based on the current state and derive the optimal mixing parameter in the convex sum for this case. Evaluation of the optimal mixing parameter requires knowledge of the eigenvalues of the weight matrix, so we present a bound on the optimal parameter whose calculation requires only local information. We provide simulation results that demonstrate the validity and effectiveness of the proposed scheme. The results indicate that incorporating a multistep predictor can lead to convergence rates much faster than those achieved by an optimum weight matrix in the standard consensus framework.
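The mechanism the abstract describes can be sketched in a few lines of Python. The predictor form below (a simple one-step extrapolation along the consensus step) and the parameters `gamma` and `k` are illustrative assumptions, not the optimal values derived in the paper; the ring topology and Metropolis weight construction are likewise just a convenient test case.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a symmetric, doubly stochastic Metropolis weight matrix
    from a symmetric 0/1 adjacency matrix (no self-loops)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # diagonal makes each row sum to one
    return W

def consensus(W, x0, iters):
    """Standard linear consensus iteration: x <- W x."""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

def accelerated_consensus(W, x0, iters, gamma=0.6, k=1.5):
    """Mix the consensus update with a one-step linear prediction.
    gamma (mixing weight) and k (extrapolation gain) are illustrative,
    not the paper's optimal mixing parameter."""
    x = x0.copy()
    for _ in range(iters):
        x_w = W @ x                  # standard consensus update
        x_pred = x + k * (x_w - x)   # assumed predictor: extrapolate the step
        x = gamma * x_pred + (1.0 - gamma) * x_w  # convex combination
    return x

# Ring network of 20 nodes as a toy example
n = 20
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
W = metropolis_weights(adj)

rng = np.random.default_rng(0)
x0 = rng.normal(size=n)
avg = x0.mean()
err_std = np.abs(consensus(W, x0, 300) - avg).max()
err_acc = np.abs(accelerated_consensus(W, x0, 300) - avg).max()
print(err_std, err_acc)  # accelerated variant is noticeably closer to the average
```

Note that with these parameters the mixed update still preserves the average exactly: the two combination coefficients sum to one and `W` is doubly stochastic, so only the rate at which the non-consensus eigenmodes contract changes.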