Techniques for generating optimal, robust plans when temporal uncertainty is present
AAAI'06: Proceedings of the 21st National Conference on Artificial Intelligence, Volume 2
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. We devise a method for reasoning directly about the uncertainty in continuous activity durations and resource usage in planning problems. Representing these random variables as parametric distributions simplifies the computation of the projected system state. Common approximations and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improved robustness over the conventional non-probabilistic representation, measured as a reduction in the number of constraint violations during execution. The improvement is more significant for larger problems and those with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
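To make the idea of parametric projection concrete, the following is a minimal sketch (not the paper's implementation) assuming each activity's duration is modeled as an independent Gaussian. Under that assumption, the projected finish time of a sequence of activities is itself Gaussian, and the probability of violating a deadline constraint can be computed in closed form and compared against an accepted risk level. All names here are illustrative.

```python
import math

def project_finish(activities):
    """Project the finish time of sequential activities.

    activities: list of (mean_duration, std_duration) pairs, each an
    independent Gaussian. The sum is Gaussian with summed means and
    summed variances.
    """
    mean = sum(m for m, _ in activities)
    var = sum(s * s for _, s in activities)
    return mean, math.sqrt(var)

def violation_probability(mean, std, deadline):
    """P(finish time > deadline) under the Gaussian projection."""
    if std == 0.0:
        return 0.0 if mean <= deadline else 1.0
    z = (deadline - mean) / std
    # Standard normal CDF via the error function.
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: three sequential activities against a deadline of 30 time units.
mean, std = project_finish([(10, 2), (8, 1), (9, 3)])
risk = violation_probability(mean, std, 30)

# A repair planner could flag the schedule as a violation whenever the
# projected risk exceeds a chosen acceptance threshold.
RISK_THRESHOLD = 0.05
acceptable = risk <= RISK_THRESHOLD
```

A conventional non-probabilistic planner would treat the mean finish time (27) as certain and accept this schedule; the probabilistic projection instead exposes a substantial chance of overrunning the deadline, which is the kind of violation the paper's method reduces.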