Accelerated distributed average consensus via localized node state prediction
IEEE Transactions on Signal Processing
Distributed averaging describes a class of network algorithms for the decentralized computation of aggregate statistics. Initially, each node has a scalar data value, and the goal is to compute the average of these values at every node (the so-called average consensus problem). Nodes iteratively exchange information with their neighbors and perform local updates until the value at every node converges to the initial network average. Much previous work has focused on algorithms where each node maintains and updates a single value; every time an update is performed, the previous value is forgotten. Convergence to the average consensus is achieved asymptotically. The convergence rate is fundamentally limited by network connectivity, and it can be prohibitively slow on topologies such as grids and random geometric graphs, even if the update rules are optimized. In this paper, we provide the first theoretical demonstration that adding a local prediction component to the update rule can significantly improve the convergence rate of distributed averaging algorithms. We focus on the case where the local predictor is a linear combination of the node's current and previous values (i.e., two memory taps), and our update rule computes a combination of the predictor and the usual weighted linear combination of values received from neighboring nodes. We derive the optimal mixing parameter for combining the predictor with the neighbors' values, and conduct a theoretical analysis of the improvement in convergence rate that can be achieved using this acceleration methodology. For a chain topology on N nodes, this leads to a factor of N improvement over standard consensus, and for a two-dimensional grid, our approach achieves a factor of √N improvement.
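The effect described above can be illustrated with a small simulation. The sketch below is not the paper's exact predictor-based update or its derived optimal mixing parameter; instead it uses the closely related heavy-ball-style two-tap update x(t+1) = (1+β)·W·x(t) − β·x(t−1), with the textbook optimal β for a symmetric weight matrix, purely to show how adding one memory tap accelerates consensus on a chain. The Metropolis-Hastings weight choice is also an assumption for concreteness.

```python
# Illustrative sketch (not the paper's exact update): standard consensus
# x(t+1) = W x(t) versus a two-tap memory update of heavy-ball type,
#   x(t+1) = (1 + beta) * W x(t) - beta * x(t-1),
# on a chain of N nodes.
import numpy as np

N, T = 20, 200
rng = np.random.default_rng(0)

# Chain topology and Metropolis-Hastings averaging weights (a common choice;
# any symmetric doubly stochastic weight matrix would do for this comparison).
adj = np.zeros((N, N))
for i in range(N - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
deg = adj.sum(axis=1)
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if adj[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

# The second-largest eigenvalue magnitude rho governs the standard rate;
# beta below is the classical heavy-ball optimum for spectrum in [-rho, rho].
rho = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
beta = (rho / (1.0 + np.sqrt(1.0 - rho**2))) ** 2

x0 = rng.standard_normal(N)
avg = x0.mean()

# Standard consensus: each node keeps a single value, forgetting the past.
x = x0.copy()
for _ in range(T):
    x = W @ x
std_err = np.linalg.norm(x - avg)

# Two-tap accelerated consensus: each node also remembers its previous value.
x_prev, x = x0.copy(), W @ x0
for _ in range(T - 1):
    x, x_prev = (1.0 + beta) * (W @ x) - beta * x_prev, x
acc_err = np.linalg.norm(x - avg)

print(f"rho={rho:.4f}  standard err={std_err:.2e}  accelerated err={acc_err:.2e}")
```

Both updates preserve the network average at every step (since W is doubly stochastic, and the momentum terms (1+β) and −β sum to 1), so the accelerated iterates still converge to the true average, only much faster on poorly connected topologies like the chain.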