Handbook of theoretical computer science (vol. B)
Markov Decision Processes: Discrete Stochastic Dynamic Programming
Planning in nondeterministic domains under partial observability via symbolic model checking
IJCAI'01 Proceedings of the 17th international joint conference on Artificial intelligence - Volume 1
Planning and acting in partially observable stochastic domains
Artificial Intelligence
Learning finite-state controllers for partially observable environments
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Knowledge Compilation Using Interval Automata and Applications to Planning
Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence
Constraint programming for controller synthesis
CP'11 Proceedings of the 17th international conference on Principles and practice of constraint programming
Beyond QCSP for solving control problems
CP'11 Proceedings of the 17th international conference on Principles and practice of constraint programming
Generalized planning: synthesizing plans that work for multiple environments
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Two
Controller synthesis is the task of automatically building controllers that take observation data as input and produce outputs guaranteeing that the controlled system satisfies desired properties. In system specification, these properties may be safety properties, which require that certain conditions always hold. In planning, they require that the evolution of the controlled system terminate in a goal state. In this paper, we propose a generic approach able to synthesize memoryless or finite-memory controllers for both safety-oriented and goal-oriented control problems. This approach relaxes some restrictive assumptions made by existing work on controller synthesis under non-determinism and partial observability, and is shown to yield potentially significant gains. The proposed “Simulate and Branch” algorithm explores the possible evolutions of the controlled system and adds new control elements when uncovered states are discovered. The approach is constraint-based in the sense that control problems are formulated using the flexibility of constraint programming languages, and our implementation uses the Gecode constraint programming library.
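The simulate-and-branch idea described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch for a goal-oriented, memoryless case: simulate all runs under a partial controller, and whenever an uncovered observation is reached, branch over the candidate actions for it. All names (`obs_of`, `succ`, `synthesize`, the run-length `bound`) are assumptions for this toy model, not the paper's formulation, which relies on constraint programming rather than explicit backtracking.

```python
# Hedged sketch of a "Simulate and Branch" style search on a toy
# nondeterministic system. Not the paper's algorithm or API: a memoryless,
# goal-oriented instance with a bounded-horizon termination check.

def synthesize(obs_of, succ, actions, inits, goal, bound=50):
    """Search for a memoryless controller: a dict observation -> action.

    obs_of(s): observation emitted in state s
    succ(s, a): set of possible successor states (nondeterminism)
    actions: iterable of available actions
    inits: set of initial states
    goal(s): True iff s is a goal state
    bound: horizon after which a run is deemed non-terminating
    """
    def simulate(ctrl):
        # Explore all runs under the partial controller `ctrl`.
        # Return 'ok' if every run reaches the goal within `bound`,
        # an uncovered observation if one is discovered, or None on failure.
        frontier = [(s, 0) for s in inits]
        seen = set()
        while frontier:
            s, depth = frontier.pop()
            if goal(s):
                continue
            if depth >= bound:
                return None  # some run does not terminate in a goal state
            o = obs_of(s)
            if o not in ctrl:
                return o  # branch point: the controller must be extended
            for t in succ(s, ctrl[o]):  # follow every nondeterministic successor
                if (t, depth + 1) not in seen:
                    seen.add((t, depth + 1))
                    frontier.append((t, depth + 1))
        return 'ok'

    def branch(ctrl):
        outcome = simulate(ctrl)
        if outcome == 'ok':
            return ctrl
        if outcome is None:
            return None  # dead end: backtrack
        for a in actions:  # try each action for the uncovered observation
            result = branch({**ctrl, outcome: a})
            if result is not None:
                return result
        return None

    return branch({})
```

For example, on a fully observable line of states 0..3 where action `'r'` moves right, `'l'` moves left, and state 3 is the goal, the search discards the looping `'l'` choices and returns the controller mapping every non-goal observation to `'r'`.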