Constraint-Based Controller Synthesis in Non-Deterministic and Partially Observable Domains
Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence
Quantified Constraint Satisfaction Problems (QCSPs) are often claimed to be well suited to modeling and solving problems such as two-player games, planning under uncertainty, and, more generally, problems in which the goal is to control a dynamic system subject to uncontrolled events. This paper shows that for a fairly large class of such problems, standard QCSP or QCSP+ is not the best approach. The main reasons are that in QCSP/QCSP+, (1) the underlying notion of system state is not explicitly taken into account, (2) problems are modeled over a bounded number of steps, and (3) algorithms search for winning strategies defined as "memoryful" policy trees rather than as "memoryless" mappings from states to decisions. This paper proposes a new constraint-based framework that does not suffer from these drawbacks. Experiments show orders-of-magnitude improvements over QCSP/QCSP+ solvers.
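The contrast drawn in the abstract between "memoryful" policy trees and "memoryless" state-to-decision mappings can be illustrated with a toy sketch. The code below is not the paper's actual framework; it is a hypothetical illustration of a memoryless winning strategy, computed by a standard backward fixpoint over states: an action wins in a state if every uncontrolled outcome of that action leads to an already-winning state. All names (`strong_plan`, `step`, the 4-state domain) are invented for this example.

```python
def strong_plan(states, actions, step, goal):
    """Backward fixpoint returning a memoryless policy {state: action}.

    step(s, a) -> set of possible successor states (nondeterministic:
    the environment, not the controller, picks the actual successor).
    The returned policy guarantees reaching `goal` whatever happens.
    """
    policy = {}
    won = set(goal)              # states from which the goal is guaranteed
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in won:
                continue
            for a in actions:
                succs = step(s, a)
                # a wins in s if every uncontrolled outcome lands in `won`
                if succs and succs <= won:
                    policy[s] = a
                    won.add(s)
                    changed = True
                    break
    return policy

# Tiny 4-state domain: "right" reliably advances one step toward state 3;
# "jump" may advance two steps or fail and stay put (uncontrolled outcome).
states = {0, 1, 2, 3}
actions = {"right", "jump"}

def step(s, a):
    if s == 3:
        return set()             # goal state, no moves needed
    if a == "right":
        return {s + 1}
    return {s, min(s + 2, 3)}    # jump: nondeterministic outcome

policy = strong_plan(states, actions, step, goal={3})
```

In this toy domain the fixpoint rejects "jump" everywhere (one of its outcomes leaves the state unchanged, so it never guarantees progress) and maps every non-goal state to "right". Note that the policy's size is bounded by the number of states, not by the horizon length, which is the point of the state-based view the abstract advocates over bounded-step QCSP encodings.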