The complexity of Markov decision processes
Mathematics of Operations Research
Parallel and distributed computation: numerical methods
Elements of information theory
Introduction to Reinforcement Learning
Algorithms for sequential decision-making
Tractable inference for complex stochastic processes
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Reinforcement Learning with Factored States and Actions
The Journal of Machine Learning Research
An online POMDP algorithm for complex multiagent environments
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
APPSSAT: Approximate probabilistic planning using stochastic satisfiability
International Journal of Approximate Reasoning
Partially observable Markov decision processes with imprecise parameters
Artificial Intelligence
Value-function approximations for partially observable Markov decision processes
Journal of Artificial Intelligence Research
Online planning algorithms for POMDPs
Journal of Artificial Intelligence Research
The value of observation for monitoring dynamic systems
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
AEMS: an anytime online search algorithm for approximate policy refinement in large POMDPs
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
Operant matching as a Nash equilibrium of an intertemporal game
Neural Computation
Reinforcement learning in POMDPs without resets
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Efficient planning in large POMDPs through policy graph based factorized approximations
ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part III
Efficient planning under uncertainty with macro-actions
Journal of Artificial Intelligence Research
Fast planning in stochastic games
UAI'00 Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence
Value-directed belief state approximation for POMDPs
UAI'00 Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence
Value-directed sampling methods for monitoring POMDPs
UAI'01 Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence
Feature extraction for decision-theoretic planning in partially observable environments
ICANN'06 Proceedings of the 16th international conference on Artificial Neural Networks - Volume Part I
Point-based online value iteration algorithm in large POMDP
Applied Intelligence
We are interested in the problem of planning for factored POMDPs. Building on the recent results of Kearns, Mansour, and Ng, we provide a planning algorithm for factored POMDPs that exploits the tradeoff between accuracy and efficiency in the belief-state simplification introduced by Boyen and Koller.
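The Boyen-Koller simplification mentioned above approximates a joint belief over several state variables by the product of its per-variable marginals, trading accuracy for a compact representation. The following is a minimal illustrative sketch of that projection step (not the authors' implementation; the function name and example distribution are made up for illustration):

```python
import numpy as np

def project_to_marginals(joint):
    """Project a joint belief (one array axis per state variable)
    onto the product of its single-variable marginals."""
    axes = range(joint.ndim)
    # Marginal for variable i: sum the joint over all other axes.
    marginals = [joint.sum(axis=tuple(a for a in axes if a != i))
                 for i in axes]
    # Outer product of the marginals gives the factored approximation.
    approx = marginals[0]
    for m in marginals[1:]:
        approx = np.multiply.outer(approx, m)
    return marginals, approx

# Example: a correlated belief over two binary state variables.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
marginals, approx = project_to_marginals(joint)
# Each marginal is [0.5, 0.5], so the factored belief is uniform (0.25
# everywhere): the correlation is lost, which is the accuracy cost paid
# for the smaller factored representation.
```

In the factored-POMDP setting, this projection is applied after each belief update so that the belief never grows beyond the chosen factored form; the induced error is what the accuracy/efficiency tradeoff controls.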