A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into an MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic (FLTL) to express rewards. Our translation has the effect of embedding model checking in the solution method. It results in an MDP of the minimal size achievable without stepping outside the anytime framework, and consequently in better policies by the deadline.
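To make the translation idea concrete, here is a minimal sketch (all names and the toy domain are illustrative assumptions, not taken from the paper) of how a non-Markovian reward can be made Markovian by augmenting each state with a piece of history. The toy reward "pay 1 once proposition p has held at some point" depends on the whole trajectory; tracking one history bit alongside the base state restores the Markov property, which is the essence of the NMRDP-to-MDP product construction.

```python
from itertools import product

# Hypothetical toy NMRDP: three base states, and p holds only in s1.
# The non-Markovian reward "1 if p has ever held" is not a function of
# the current state alone.
states = ["s0", "s1", "s2"]
p_holds = {"s0": False, "s1": True, "s2": False}

def translate(states, p_holds):
    """Cross base states with a 'p has been seen' history bit.

    In each expanded state the reward depends only on the current
    (state, seen_p) pair, so the resulting process is an ordinary MDP.
    """
    expanded = []
    for s, seen_p in product(states, [False, True]):
        reward = 1.0 if (seen_p or p_holds[s]) else 0.0
        expanded.append(((s, seen_p), reward))
    return expanded

mdp_states = translate(states, p_holds)
```

The paper's translation plays the same game with formulas of its FLTL extension instead of a single bit, progressing the reward formulas on the fly so that only the history distinctions actually reachable by the anytime search are ever materialised.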