Think globally, act locally: on the reshaping of information landscapes
Proceedings of the 12th international conference on Information processing in sensor networks
The paper considers higher dimensional consensus (HDC), a general class of linear distributed algorithms for large-scale networks that generalizes average-consensus and includes other distributed algorithms of interest, such as sensor localization, leader-follower algorithms in multiagent systems, and the distributed Jacobi algorithm. In HDC, the network nodes are partitioned into "anchors," whose states remain fixed over the HDC iterations, and "sensors," whose states are updated by the algorithm. The paper begins by briefly treating what we call the forward problem: the conditions under which HDC converges, the limiting state to which it converges, and its convergence rate. The main focus of the paper is the inverse, or design, problem, i.e., learning the weights or parameters of HDC so that the algorithm converges to a desired, prespecified state; this generalizes the well-known problem of designing the weights in average-consensus. We pose learning as a constrained nonconvex optimization problem, which we cast in the framework of multiobjective optimization (MOP) and to which we apply Pareto optimality. We derive the solution to the learning problem by proving relevant properties satisfied by the MOP solutions and by the Pareto front. Finally, the paper shows how the MOP approach exposes interesting tradeoffs (speed of convergence versus performance) that arise in resource-constrained networks. Simulation studies illustrate our approach for a leader-follower architecture in multiagent systems.
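The anchor/sensor split described above can be made concrete with a small sketch. In an HDC iteration, anchor states stay fixed while each sensor state is replaced by a weighted combination of neighboring sensor and anchor states; when the sensor-to-sensor weight matrix has spectral radius below one, the iteration converges to a limit determined by the anchors. The weights below are illustrative toy values for a leader-follower chain (one anchor leading two sensors), not the learned weights the paper derives:

```python
def hdc_run(P, B, x_anchor, x0, num_iters=200):
    """Run the HDC iteration x_{t+1} = P x_t + B x_anchor.

    P: sensor-to-sensor weight matrix (convergence requires its
       spectral radius to be < 1); B: anchor-to-sensor weights;
    x_anchor: fixed anchor states; x0: initial sensor states.
    """
    x = list(x0)
    for _ in range(num_iters):
        x = [sum(P[i][j] * x[j] for j in range(len(x)))
             + sum(B[i][k] * x_anchor[k] for k in range(len(x_anchor)))
             for i in range(len(x))]
    return x

# Toy leader-follower chain: anchor -> sensor 1 -> sensor 2.
P = [[0.0, 0.5],   # sensor 1 averages with sensor 2 ...
     [1.0, 0.0]]   # ... and sensor 2 copies sensor 1
B = [[0.5],        # sensor 1 also listens to the anchor
     [0.0]]
limit = hdc_run(P, B, x_anchor=[1.0], x0=[0.0, 0.0])
# Both sensor states approach the anchor (leader) state 1.0.
```

With these weights the iteration matrix has spectral radius below one, so both followers converge to the leader's state, i.e., the limit is set entirely by the anchor, mirroring the leader-follower architecture used in the paper's simulations.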