Learning for SAT and MINSAT, and algorithms for quantified SAT and MINSAT
This paper describes learning in a compiler for algorithms solving classes of the logic minimization problem MINSAT, where the underlying propositional formula is in conjunctive normal form (CNF) and where costs are associated with the True/False values of the variables. Each class consists of all instances that may be derived from a given propositional formula and costs for True/False values by fixing or deleting variables, and by deleting clauses. The learning step begins once the compiler has constructed a solution algorithm for a given class. The step applies that algorithm to comparatively few instances of the class, analyses the performance of the algorithm on these instances, and modifies the underlying propositional formula, with the goal that the algorithm will perform much better on all instances of the class.
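To make the problem setting concrete, the following is a minimal sketch of MINSAT as described above: a CNF formula with a cost attached to each variable's True and False value, solved here by brute-force enumeration (the paper's compiled solution algorithms are far more sophisticated). It also sketches one way an instance of a class can be derived from the base formula by fixing a variable. All function names and data representations here are illustrative assumptions, not the paper's notation.

```python
from itertools import product

def minsat(clauses, cost_true, cost_false):
    """Brute-force MINSAT: minimum-cost satisfying assignment.

    clauses: list of clauses, each a list of integer literals
             (+i means variable i is True, -i means variable i is False).
    cost_true[i] / cost_false[i]: cost of assigning variable i True / False.
    Returns (best_cost, best_assignment), or (None, None) if unsatisfiable.
    """
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    best_cost, best_assign = None, None
    for values in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, values))
        # A CNF formula is satisfied when every clause has a true literal.
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            cost = sum(cost_true[v] if assign[v] else cost_false[v]
                       for v in variables)
            if best_cost is None or cost < best_cost:
                best_cost, best_assign = cost, assign
    return best_cost, best_assign

def fix_variable(clauses, var, value):
    """Derive a subinstance of the class by fixing `var` to `value`:
    satisfied clauses are deleted, falsified literals are removed."""
    lit = var if value else -var
    return [[l for l in clause if l != -lit]
            for clause in clauses if lit not in clause]
```

For example, with clauses [[1, 2], [-1, -2]] (exactly one of x1, x2 must be True), cost_true = {1: 3, 2: 1}, and zero cost for False, the minimum-cost solution sets x2 True at cost 1; fixing x1 to True derives the smaller instance [[-2]].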