This paper analyzes the rate of convergence of a distributed averaging scheme that makes use of memory at each node. In conventional distributed averaging, each node computes an update based on its own current state and the current states of its neighbours. Previous work observed that the trajectories at each node converge smoothly and demonstrated via simulation that a predictive framework can lead to faster convergence. This paper provides theoretical guarantees for a distributed averaging algorithm with memory. We analyze a scheme in which updates are computed as a convex combination of two terms: (i) the usual update, using only current states, and (ii) a local linear predictor term that makes use of a node's current and previous states. Although this scheme requires only one additional memory register per node, we prove that it can lead to dramatic improvements in the rate of convergence. For example, on the N-node chain topology our approach yields a factor-of-N improvement over the standard approach, and on the two-dimensional grid it achieves a factor-of-√N improvement. Our analysis is direct: it relates the eigenvalues of a conventional (memoryless) averaging matrix to those of the averaging matrix implementing the proposed scheme, via a standard linearization of the quadratic eigenvalue problem. The success of our approach relies on each node using the optimal parameter for combining the two update terms. We derive a closed-form expression for this optimal parameter as a function of the second-largest eigenvalue of a memoryless averaging matrix, which can easily be computed in a decentralized fashion using existing methods, making our approach amenable to practical implementation.
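To illustrate the flavor of such memory-accelerated averaging, the sketch below compares plain memoryless iteration x(t+1) = W x(t) against the well-known two-register recursion x(t+1) = β W x(t) + (1 − β) x(t−1), with β = 2 / (1 + √(1 − λ₂²)) chosen from the second-largest eigenvalue magnitude λ₂ of W. This is a standard second-order acceleration in the same spirit as the scheme described above, not necessarily the paper's exact parameterization; the Metropolis weight matrix, chain topology, and problem sizes are illustrative assumptions.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic averaging matrix built from an
    adjacency matrix using Metropolis-Hastings weights (illustrative choice)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # rows sum to one
    return W

def accelerated_average(W, x0, beta, num_iters):
    """Averaging with one extra memory register per node:
    x(t+1) = beta * W @ x(t) + (1 - beta) * x(t-1)."""
    x_prev, x = x0, W @ x0  # first step is plain memoryless averaging
    for _ in range(num_iters - 1):
        x_prev, x = x, beta * (W @ x) + (1.0 - beta) * x_prev
    return x

# Illustrative N-node chain topology (the slow case highlighted above).
N = 20
adj = np.zeros((N, N))
for i in range(N - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
W = metropolis_weights(adj)

# Mixing parameter from the second-largest eigenvalue magnitude of W.
lam2 = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
beta = 2.0 / (1.0 + np.sqrt(1.0 - lam2 ** 2))

rng = np.random.default_rng(0)
x0 = rng.normal(size=N)
T = 300
x_std = np.linalg.matrix_power(W, T) @ x0    # memoryless averaging
x_acc = accelerated_average(W, x0, beta, T)  # with one memory register
err_std = np.max(np.abs(x_std - x0.mean()))
err_acc = np.max(np.abs(x_acc - x0.mean()))
```

Both recursions preserve the network average at every step (W is doubly stochastic, and the two coefficients β and 1 − β sum to one), but on the chain the memory term damps the slow modes far more aggressively, so `err_acc` falls well below `err_std` for the same number of iterations.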