Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision-theoretic planning problems, are limited by the Markovian assumption: rewards and dynamics depend only on the current state, not on previous history. Non-Markovian decision processes (NMDPs) can also be defined, but the more tractable solution techniques developed for MDPs cannot be applied to them directly. In this paper, we show how an NMDP, in which temporal logic is used to specify history dependence, can be automatically converted into an equivalent MDP by adding appropriate temporal variables. The resulting MDP can be represented in a structured fashion and solved using structured policy construction methods. In many cases, this offers significant computational advantages over previous proposals for solving NMDPs.
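The core idea of the conversion can be illustrated with a minimal sketch. The toy below is an assumption for illustration only (it is not the paper's temporal-logic construction): a history-dependent reward pays off in state `"end"` only if state `"key"` was visited earlier, and adding one boolean temporal variable `seen_key` to the state makes that reward a function of the augmented current state alone.

```python
# Minimal sketch (illustrative toy, not the paper's algorithm): converting a
# history-dependent (non-Markovian) reward into a Markovian one by adding a
# temporal variable to the state. State names are hypothetical.

def nmdp_reward(history):
    """NMDP reward on full histories: +1 at 'end' iff 'key' was visited earlier."""
    return 1 if history[-1] == "end" and "key" in history[:-1] else 0

def augment(state, seen_key):
    """Augmented MDP state: the original state plus one temporal variable."""
    return (state, seen_key)

def step_augmented(aug_state, next_state):
    """Update the temporal variable deterministically alongside the dynamics."""
    state, seen_key = aug_state
    return (next_state, seen_key or state == "key")

def mdp_reward(aug_state):
    """Markovian reward: a function of the augmented current state only."""
    state, seen_key = aug_state
    return 1 if state == "end" and seen_key else 0

# Check equivalence on a sample trajectory.
traj = ["start", "key", "mid", "end"]
aug = augment(traj[0], False)
mdp_rewards = []
for s in traj[1:]:
    aug = step_augmented(aug, s)
    mdp_rewards.append(mdp_reward(aug))
nmdp_rewards = [nmdp_reward(traj[:i + 1]) for i in range(1, len(traj))]
assert mdp_rewards == nmdp_rewards  # both are [0, 0, 1]
```

Because `seen_key` is a single extra boolean variable updated by a simple deterministic rule, the augmented MDP can be represented compactly in a structured (factored) encoding, which is what enables the structured policy construction methods mentioned in the abstract.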