Solving Markov decision processes (MDPs) with continuous state spaces is challenging due to, among other problems, the well-known curse of dimensionality. Nevertheless, numerous real-world applications, such as transportation planning and telescope observation scheduling, depend critically on continuous states. Current approaches to continuous-state MDPs include discretizing their transition models. In this paper, we propose and study an alternative, discretization-free approach we call lazy approximation. Our empirical study shows that lazy approximation performs much better than discretization, and we successfully applied this new technique to a more realistic planetary rover planning problem.
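For context, the discretization baseline the abstract refers to can be sketched as follows: partition the continuous state interval into bins, approximate each continuous transition kernel by a row-normalized matrix over those bins, and run finite-horizon value iteration on the resulting finite MDP. This is only an illustrative sketch; the state interval, Gaussian noise model, action set, and reward function below are all invented for the example and are not taken from the paper.

```python
import numpy as np

def solve_discretized_mdp(n_bins=50, horizon=15, gamma=0.95, sigma=0.05):
    """Finite-horizon value iteration on a discretized 1-D continuous MDP.

    States live in [0, 1]; each action nudges the state left or right
    with Gaussian noise. All constants here are illustrative.
    """
    centers = (np.arange(n_bins) + 0.5) / n_bins   # bin midpoints
    actions = [-0.1, 0.0, 0.1]                     # hypothetical action set
    # Hypothetical reward: 1 inside a band around a goal state at 0.8.
    reward = np.where(np.abs(centers - 0.8) < 0.1, 1.0, 0.0)

    # Discretize each continuous transition kernel into a
    # row-stochastic matrix over the bins.
    P = []
    for a in actions:
        mu = np.clip(centers + a, 0.0, 1.0)
        dens = np.exp(-(centers[None, :] - mu[:, None]) ** 2
                      / (2.0 * sigma ** 2))
        P.append(dens / dens.sum(axis=1, keepdims=True))

    # Backward induction: V_T = 0, V_t = max_a [r + gamma * P_a V_{t+1}].
    V = np.zeros(n_bins)
    for _ in range(horizon):
        q = np.stack([reward + gamma * Pa @ V for Pa in P])
        V = q.max(axis=0)
    policy = q.argmax(axis=0)                      # greedy action per bin
    return centers, V, policy

centers, V, policy = solve_discretized_mdp()
```

The curse of dimensionality shows up directly in this sketch: with `d` continuous dimensions and `n_bins` bins each, the transition matrices grow as `n_bins^(2d)`, which is what motivates discretization-free alternatives such as lazy approximation.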