The constraint satisfaction problem (CSP) is an NP-complete problem that provides a convenient framework for expressing many computationally hard problems. In addition, domain knowledge can be integrated efficiently into a CSP, which in some cases yields an exponential speedup. The CSP is closely related to the satisfiability problem (SAT), and many of the techniques developed for one have been transferred to the other. However, the recent dramatic improvements in SAT solvers that stem from learning clauses during search have not been transferred successfully to CSP solvers. In this thesis we propose that this failure is due to a fundamental restriction of nogood learning, the technique intended to be the CSP analogue of clause learning. Because of this restriction, nogood learning can be superpolynomially slower than clause learning on some instances. We show that the restriction can be lifted, with promising results.

Integrating nogood learning into a CSP solver presents an additional challenge, however: a large body of domain knowledge is typically encoded in domain-specific propagation algorithms called global constraints, and global constraints often completely negate the advantages of nogood learning. We demonstrate generic methods that partially alleviate this problem irrespective of the type of global constraint. We also show that more efficient methods can be integrated into specific global constraints, and we demonstrate the feasibility of this approach on several widely used global constraints.
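To make the restriction concrete, here is a minimal sketch, not taken from the thesis and with all names illustrative, of classical nogood recording in a toy backtracking CSP solver. A classical nogood is a set of assignments var = val that cannot all hold together; the restriction discussed above is that, unlike a learned SAT clause, it cannot mention negative literals var != val.

```python
# Toy CSP solver with classical nogood recording (illustrative sketch).
def solve(domains, constraints):
    """domains: {var: list of values}.
    constraints: list of (scope, predicate) pairs."""
    nogoods = []                     # each nogood: frozenset of (var, value)
    order = list(domains)            # static variable order

    def consistent(assignment):
        # Check every constraint whose scope is fully assigned.
        for scope, pred in constraints:
            if all(v in assignment for v in scope):
                if not pred(*(assignment[v] for v in scope)):
                    return False
        return True

    def hits_nogood(assignment):
        items = set(assignment.items())
        return any(ng <= items for ng in nogoods)

    def dfs(assignment):
        if len(assignment) == len(order):
            return dict(assignment)  # full, consistent assignment found
        var = order[len(assignment)]
        for val in domains[var]:
            assignment[var] = val
            if consistent(assignment) and not hits_nogood(assignment):
                result = dfs(assignment)
                if result is not None:
                    return result
            del assignment[var]
        # Dead end: record the failed partial assignment as a nogood.
        # Note the restriction the thesis targets: this nogood is a
        # conjunction of assignments (var = val) only, whereas a SAT
        # learner may also use negative literals (var != val).
        nogoods.append(frozenset(assignment.items()))
        return None

    return dfs({})

# Usage: colour a 4-node graph with 3 colours.
domains = {v: ["r", "g", "b"] for v in "wxyz"}
neq = lambda a, b: a != b
edges = [("w", "x"), ("x", "y"), ("w", "y"), ("y", "z")]
print(solve(domains, [(e, neq) for e in edges]))
# e.g. {'w': 'r', 'x': 'g', 'y': 'b', 'z': 'r'}
```

With the static variable order used here a failed partial assignment never recurs, so the recorded nogoods never actually prune; practical learners instead derive small nogoods by conflict analysis. The sketch only illustrates the data structure and the restriction.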
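The interaction with global constraints can likewise be sketched. Below is a hypothetical explaining propagator: a deliberately naive alldifferent that, whenever it prunes a value, also records a reason, that is, a set of literals that implied the pruning, so a learning solver can trace conflicts back through propagation. The literal encoding (var, "==", val) / (var, "!=", val) and the trail structure are assumptions for illustration, not the thesis's interface.

```python
# Naive alldifferent that explains each of its prunings (illustrative).
def propagate_alldifferent(domains, trail):
    """domains: {var: set of values}, mutated in place.
    trail: list of (pruned_literal, explanation_literals) pairs.
    Returns False on a domain wipe-out (conflict), True otherwise."""
    changed = True
    while changed:
        changed = False
        for x in list(domains):
            if len(domains[x]) == 1:
                (v,) = domains[x]
                for y in domains:
                    if y != x and v in domains[y]:
                        domains[y].discard(v)
                        # Reason: x == v forces y != v.
                        trail.append(((y, "!=", v), {(x, "==", v)}))
                        changed = True
                        if not domains[y]:
                            return False  # conflict; trail explains it
    return True

# Usage: fixing 'a' triggers a chain of explained prunings.
trail = []
doms = {"a": {1}, "b": {1, 2}, "c": {2, 3}}
print(propagate_alldifferent(doms, trail))  # True
print(doms)   # {'a': {1}, 'b': {2}, 'c': {3}}
print(trail)  # [(('b', '!=', 1), {('a', '==', 1)}),
              #  (('c', '!=', 2), {('b', '==', 2)})]
```

This fixed-value filtering is far weaker than Régin's matching-based alldifferent propagation; the point the abstract makes is precisely that such stronger, monolithic propagators hide the reasons for their prunings, so explanation machinery has to be designed into each of them.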