Boosting combinatorial search through randomization
AAAI '98/IAAI '98: Proceedings of the Fifteenth National Conference on Artificial Intelligence / Tenth Conference on Innovative Applications of Artificial Intelligence
Radio Link Frequency Assignment
Constraints
Neighborhood-Based Variable Ordering Heuristics for the Constraint Satisfaction Problem
CP '01: Proceedings of the 7th International Conference on Principles and Practice of Constraint Programming
Handbook of Constraint Programming (Foundations of Artificial Intelligence)
Extracting MUCs from Constraint Networks
ECAI 2006: Proceedings of the 17th European Conference on Artificial Intelligence, August 29 – September 1, 2006, Riva del Garda, Italy
Nogood recording from restarts
IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence
Extracting constraint satisfaction subproblems
IJCAI '95: Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1
A hybrid paradigm for adaptive parallel search
CP '12: Proceedings of the 18th International Conference on Principles and Practice of Constraint Programming
Systematic tree search is often used in conjunction with inference and restarts when solving challenging Constraint Satisfaction Problems (CSPs). To improve the efficiency of constraint solving, techniques that learn during search, such as constraint weighting and nogood learning, have been proposed. Constraint weights can be used to guide heuristic choices, while previously encountered nogoods can be avoided by recording them as additional constraints. Both techniques can be used either in one-shot systematic search or in a setting where the search procedure is frequently restarted. In this paper we propose a third way of learning during search, generalising previous work by Freuder and Hubbe. Specifically, we show how, in a restart context, we can guarantee that a previously visited region of the search space is never revisited, by extracting that region from the problem. Likewise, we can avoid revisiting inconsistent regions of the search space by extracting inconsistent subproblems, based on a significant improvement upon Freuder and Hubbe's approach. A major empirical result of this paper is that our approach significantly outperforms MAC combined with weighted degree heuristics and restarts on challenging constraint problems. Our approach can be regarded as an efficient form of learning that does not rely on constraint propagation. Instead, we rely on a reformulation of a CSP into an equivalent set of CSPs, none of which contain any of the search space we wish to avoid.
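To illustrate the kind of reformulation the abstract describes, the following is a minimal sketch (not the authors' implementation) of splitting a CSP's search space into an equivalent set of subproblems that together exclude a visited region. Here a visited region is modelled simply as an assignment prefix [(x1, v1), ..., (xk, vk)] explored before a restart, and domains as sets of values; the function name and representation are illustrative assumptions.

```python
# Sketch of excluding a visited region of the search space by splitting
# the problem into subproblems, in the spirit of Freuder and Hubbe's
# subproblem extraction. Domains: dict mapping variable -> set of values.
# The visited region is the assignment prefix explored before a restart.

def exclude_region(domains, prefix):
    """Return a list of subproblems (as domain dicts) whose union covers
    the original search space minus the region where every (var, val)
    in prefix holds. The subproblems are pairwise disjoint."""
    subproblems = []
    fixed = {}  # prefix assignments committed so far
    for var, val in prefix:
        # Subproblem i: all earlier prefix variables fixed to their
        # visited values, current variable takes any value EXCEPT the
        # visited one.
        sub = {v: set(d) for v, d in domains.items()}
        sub.update({v: {w} for v, w in fixed.items()})
        sub[var] = sub[var] - {val}
        subproblems.append(sub)
        fixed[var] = val
    return subproblems

# Example: two variables with domains {1,2,3}; visited region x=1, y=2.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
subs = exclude_region(doms, [("x", 1), ("y", 2)])
# subs[0] has x in {2,3} (y unrestricted); subs[1] fixes x=1 with y in {1,3}.
```

Together with the visited region itself, these subproblems partition the original search space, so solving each of them in turn never re-explores what a previous run already covered.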