Higher dimensional consensus: learning in large-scale networks

  • Authors:
  • Usman A. Khan; Soummya Kar; José M. F. Moura

  • Affiliations:
  • Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, and Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2010


Abstract

The paper considers higher dimensional consensus (HDC). HDC is a general class of linear distributed algorithms for large-scale networks that generalizes average-consensus and includes other interesting distributed algorithms, such as sensor localization, leader-follower algorithms in multiagent systems, and the distributed Jacobi algorithm. In HDC, the network nodes are partitioned into "anchors," nodes whose states remain fixed over the HDC iterations, and "sensors," nodes whose states are updated by the algorithm. The paper starts by briefly considering what we call the forward problem: the conditions under which HDC converges, the limiting state to which it converges, and its convergence rate. The main focus of the paper is the inverse, or design, problem, i.e., learning the weights or parameters of the HDC so that the algorithm converges to a desired prespecified state; this generalizes the well-known problem of designing the weights in average-consensus. We pose learning as a constrained nonconvex optimization problem, cast it in the framework of multiobjective optimization (MOP), and apply Pareto optimality. We derive the solution to the learning problem by proving relevant properties satisfied by the MOP solutions and by the Pareto front. Finally, the paper shows how the MOP approach exposes the tradeoffs (speed of convergence versus performance) that arise in resource-constrained networks. Simulation studies illustrate our approach for a leader-follower architecture in multiagent systems.
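
As a concrete illustration of the forward problem described above, the sketch below simulates a generic HDC-style linear iteration in which sensor states are repeatedly updated as a fixed linear combination of neighboring sensor and anchor states, while anchor states stay fixed. The matrix names P and B, the random weights, and the convergence condition (spectral radius of P strictly below one) are standard assumptions for such linear iterations and are used here only for illustration; this is not the authors' code or their specific weight design.

```python
import numpy as np

# Minimal sketch of an HDC-style forward iteration (illustrative, not the paper's code).
# Sensor states x are updated as a linear combination of sensor and anchor states:
#
#     x_{k+1} = P @ x_k + B @ u
#
# where u stacks the fixed anchor states, P collects sensor-to-sensor weights,
# and B collects anchor-to-sensor weights. The names P, B are assumptions here.

rng = np.random.default_rng(0)
n_sensors, n_anchors = 5, 2

# Illustrative weights; P is rescaled so its spectral radius is below 1,
# the standard sufficient condition for this linear iteration to converge.
P = rng.random((n_sensors, n_sensors))
P *= 0.9 / np.max(np.abs(np.linalg.eigvals(P)))
B = rng.random((n_sensors, n_anchors))

u = rng.random(n_anchors)      # anchor states: fixed over the iterations
x = np.zeros(n_sensors)        # sensor states: updated by the algorithm

for _ in range(500):           # forward HDC iterations
    x = P @ x + B @ u

# With rho(P) < 1, the iterates converge to the limiting state (I - P)^{-1} B u.
x_limit = np.linalg.solve(np.eye(n_sensors) - P, B @ u)
print(np.allclose(x, x_limit))  # True
```

Under these assumptions, the limiting state depends on the anchor states and on the chosen weights, which is what makes the inverse problem of learning the weights to reach a desired prespecified state meaningful.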