Continuous-state partially observable Markov decision processes (POMDPs) are an intuitive choice of representation for many stochastic planning problems with a hidden state. We consider a continuous-state POMDPs with finite action and observation spaces, where the POMDP is parametrised by weighted sums of Gaussians, or Gaussian mixture models (GMMs). In particular, we study the problem of optimising the selection of measurement channel in such a framework. A new error bound for a point-based value iteration algorithm is derived, and a method for constructing a subset of belief states that attempts to reduce the error bound is implemented. In the experiments, applying continuous-state POMDPs for optimal selection of the measurement channel is demonstrated, and the performance of three GMM simplification methods is compared. Convergence of a point-based value iteration algorithm is investigated by considering various metrics for the obtained control policies.