Adaptive sensing involves actively managing sensor resources to achieve a sensing task, such as object detection, classification, or tracking, and represents a promising direction for new applications of discrete event system methods. We describe an approach to adaptive sensing based on approximately solving a partially observable Markov decision process (POMDP) formulation of the problem. Such approximations are necessary because the very large state spaces of practical adaptive sensing problems preclude exact computation of optimal solutions. We review the theory of POMDPs and show how it applies to adaptive sensing problems. We then describe a variety of approximation methods, with examples illustrating their application in adaptive sensing. The examples also demonstrate the gains that nonmyopic methods can achieve over myopic methods, and highlight how those gains depend on the sensing resources and the environment.
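At the core of the POMDP formulation described above is the belief state: a probability distribution over the hidden state that is updated by Bayes' rule after each sensing action and observation. The following sketch illustrates that update for a hypothetical two-state target with two sensing actions of differing accuracy; all model matrices here are illustrative assumptions, not taken from the source.

```python
import numpy as np

def belief_update(belief, action, obs, T, O):
    """One Bayes-filter step: predict through the transition model T,
    then correct with the observation likelihood O[action][:, obs]."""
    predicted = T.T @ belief                 # prediction step
    unnorm = O[action][:, obs] * predicted   # correction step
    return unnorm / unnorm.sum()             # normalize to a distribution

# Hypothetical 2-state model: state 0 = "target absent", 1 = "target present".
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                   # state transition probabilities

# Two sensing actions (rows: true state, cols: observation 0 or 1).
O = {0: np.array([[0.6, 0.4],
                  [0.4, 0.6]]),              # cheap, noisy sensor
     1: np.array([[0.9, 0.1],
                  [0.1, 0.9]])}              # costly, accurate sensor

belief = np.array([0.5, 0.5])                # uninformed prior
# Use the accurate sensor (action 1) and observe a detection (obs 1).
belief = belief_update(belief, action=1, obs=1, T=T, O=O)
print(belief)  # posterior mass shifts toward "target present"
```

A myopic policy would pick the action maximizing an immediate criterion (e.g., expected information gain) of this belief; the nonmyopic methods surveyed in the paper instead approximate the value of future beliefs reachable from it.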