Readylog is a logic-based agent programming language that combines many important features from other Golog dialects. One of these features is the use of decision-theoretic planning to specify the behavior of an agent or robot. In this paper we present a method to reduce the planning time for decision-theoretic planning in the Readylog framework. Instead of computing policies on the fly over and over again, we calculate an abstract policy once and store it in a plan library, from which it can later be re-instantiated. With this plan library the on-line planning time can be reduced significantly. We compare policies computed on the fly with those retrieved from our plan library on examples from the robotic soccer domain. In the 2D soccer simulation league we show a significant speed-up when using our plan-library approach. Moreover, the plan library, together with a suitable state space abstraction for the soccer domain, makes it possible to apply macro-actions in an otherwise continuous domain.
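The core idea can be sketched as memoization over abstract states: plan once per abstract state, cache the resulting policy, and reuse it for any concrete state that maps to the same abstract one. This is a minimal illustrative sketch, not the ReadyLog implementation; the names `abstract_state`, `plan_policy`, and `PlanLibrary` and the toy grid abstraction are assumptions made for the example.

```python
# Hypothetical sketch of the plan-library idea: instead of running expensive
# decision-theoretic planning for every concrete (continuous) state, plan once
# per *abstract* state and store the policy for later re-instantiation.
# All names here are illustrative, not part of the actual ReadyLog framework.

def abstract_state(state):
    """Map a concrete continuous state to a coarse abstract one,
    e.g. by discretizing positions into grid cells."""
    x, y = state
    return (round(x), round(y))

def plan_policy(abs_state):
    """Stand-in for expensive decision-theoretic planning: here just a
    trivial policy that steps toward the origin cell."""
    ax, ay = abs_state
    return {"dx": -1 if ax > 0 else (1 if ax < 0 else 0),
            "dy": -1 if ay > 0 else (1 if ay < 0 else 0)}

class PlanLibrary:
    def __init__(self):
        self.cache = {}   # abstract state -> stored abstract policy
        self.plans = 0    # number of full planning runs (for comparison)

    def policy_for(self, state):
        abs_state = abstract_state(state)
        if abs_state not in self.cache:      # plan only once per abstract state
            self.cache[abs_state] = plan_policy(abs_state)
            self.plans += 1
        return self.cache[abs_state]         # re-instantiate the stored policy

lib = PlanLibrary()
p1 = lib.policy_for((2.3, -1.1))   # first query: plans and stores the policy
p2 = lib.policy_for((2.4, -0.9))   # same abstract cell: reused, no new planning
```

After both queries only one planning run has occurred (`lib.plans == 1`), which is the source of the speed-up the abstract reports; the state space abstraction also explains why macro-actions become applicable in an otherwise continuous domain.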