A POMDP framework for coordinated guidance of autonomous UAVs for multitarget tracking
EURASIP Journal on Advances in Signal Processing - Special issue on signal processing advances in robots and autonomy
We apply the theory of partially observable Markov decision processes (POMDPs) to the design of guidance algorithms for controlling the motion of unmanned aerial vehicles (UAVs) with on-board sensors for tracking multiple ground targets. While POMDPs are intractable to optimize exactly, principled approximation methods can be devised based on Bellman's principle. We introduce a new approximation method called nominal belief-state optimization (NBO). We show that NBO, combined with other application-specific approximations and techniques within the POMDP framework, produces a practical design that coordinates the UAVs to achieve good long-term mean-squared-error tracking performance in the presence of occlusions and dynamic constraints.
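To illustrate the idea behind nominal belief-state optimization, the following is a minimal, hypothetical sketch (not the paper's actual implementation) for a single target with linear-Gaussian dynamics. The belief state is a Kalman mean and covariance; random quantities are replaced by their nominal values, so each candidate UAV control sequence yields a deterministic belief trajectory, scored by the accumulated trace of the tracking covariance as a mean-squared-error proxy. The dynamics matrices, the distance-dependent sensor-noise model, and the discrete move set are all assumptions made for this example.

```python
import numpy as np
from itertools import product

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed target dynamics (position, velocity)
Q = 0.01 * np.eye(2)                      # assumed process noise covariance
H = np.array([[1.0, 0.0]])                # position-only measurement model

def meas_noise(uav_pos, target_pos):
    # Assumed sensor model: measurement noise grows with UAV-target separation.
    return np.array([[0.1 + 0.05 * abs(uav_pos - target_pos)]])

def nbo_cost(uav_pos, moves, mean, cov):
    """Score one candidate UAV control sequence under nominal belief propagation."""
    u = uav_pos
    m, P = mean.copy(), cov.copy()
    cost = 0.0
    for dv in moves:
        u += dv                                # apply the UAV control
        m = F @ m                              # nominal (zero-noise) state prediction
        P = F @ P @ F.T + Q                    # covariance prediction
        R = meas_noise(u, m[0])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H) @ P            # nominal measurement equals the predicted
        cost += np.trace(P)                    # one, so the mean update is a no-op
    return cost

def nbo_plan(uav_pos, mean, cov, horizon=3, moves=(-1.0, 0.0, 1.0)):
    """Exhaustively search short control sequences; return the best first move."""
    best = min(product(moves, repeat=horizon),
               key=lambda seq: nbo_cost(uav_pos, seq, mean, cov))
    return best[0]
```

Because the covariance recursion in the linear-Gaussian case depends on the controls only through the measurement noise, the nominal belief trajectory captures exactly how UAV motion trades off against tracking error over the planning horizon, which is the quantity the NBO approximation optimizes.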