Decision-theoretic planning with non-Markovian rewards
Journal of Artificial Intelligence Research
This paper examines a number of solution methods for decision processes with non-Markovian rewards (NMRDPs). They all exploit a temporal logic specification of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to well-known MDP solution methods. They differ, however, in the representation of the target MDP and in the class of MDP solution methods to which they are suited, and consequently adopt different temporal logics and different translations. Unfortunately, no implementation of these methods has been reported, nor have experimental, let alone comparative, results. This paper is a first step towards filling this gap. We describe an integrated system for solving NMRDPs which implements these methods and several variants under a common interface; we use it to compare the various approaches and to identify problem features that favour one over the others.
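The core idea behind all of the translations described above can be illustrated on a toy example. The following sketch (not the paper's algorithm; the state names, the single temporal condition, and the deterministic transitions are hypothetical simplifications) shows how a reward that depends on history, here "receive 1 on reaching `q` if `p` was visited earlier", becomes Markovian once the state is augmented with one automaton bit tracking whether `p` has been seen:

```python
from itertools import product

# Hypothetical toy NMRDP with deterministic transitions for brevity.
STATES = ["s0", "p", "q"]
ACTIONS = ["a", "b"]
TRANS = {("s0", "a"): "p", ("s0", "b"): "q",
         ("p", "a"): "q", ("p", "b"): "s0",
         ("q", "a"): "s0", ("q", "b"): "p"}

def nmrdp_reward(history):
    """Non-Markovian reward over full histories:
    1 on entering q after p has occurred earlier in the history."""
    return 1.0 if history[-1] == "q" and "p" in history[:-1] else 0.0

def translate():
    """Build an equivalent MDP over augmented states (s, seen_p),
    where one automaton bit records whether p has been visited."""
    states = list(product(STATES, [False, True]))
    trans, reward = {}, {}
    for (s, seen), a in product(states, ACTIONS):
        s2 = TRANS[(s, a)]
        seen2 = seen or s2 == "p"
        trans[((s, seen), a)] = (s2, seen2)
        # Markovian reward: depends only on the augmented transition.
        reward[((s, seen), a)] = 1.0 if s2 == "q" and seen else 0.0
    return states, trans, reward

states, trans, reward = translate()

# The history s0 -a-> p -a-> q earns reward 1 in the NMRDP,
# and the corresponding augmented transition earns it in the MDP.
assert nmrdp_reward(["s0", "p", "q"]) == 1.0
assert reward[(("p", True), "a")] == 1.0
# Reaching q without having seen p earns nothing in either formulation.
assert nmrdp_reward(["s0", "q"]) == 0.0
assert reward[(("s0", False), "b")] == 0.0
```

The methods compared in the paper differ precisely in how this augmentation is computed and represented, e.g. which temporal logic expresses the reward condition and whether the resulting MDP is enumerated, generated on the fly, or kept in factored form.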