The Model Evolution (ME) calculus is a proper lifting to first-order logic of the DPLL procedure, a backtracking search procedure for propositional satisfiability. Like DPLL, the ME calculus is based on the idea of incrementally building a model of the input formula by alternating constraint propagation steps with non-deterministic decision steps. One of the major conceptual improvements over basic DPLL is lemma learning, a mechanism for generating new formulae that prevent, later in the search, combinations of decision steps that are guaranteed to lead to failure. We introduce two lemma generation methods for ME proof procedures, with varying degrees of power, effectiveness in reducing search, and computational overhead. While formally correct, each of these methods presents complications that do not exist at the propositional level but must be addressed for learning to be effective in practice for ME. We discuss some of these issues and present initial experimental results on the performance of an implementation of the two learning procedures within our prover Darwin.
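To fix intuitions at the propositional level, the following sketch shows plain DPLL with a deliberately naive form of lemma learning: on conflict, it records the negation of the current decision sequence as a new clause, blocking that combination of decisions later in the search. All names here are illustrative; this is the propositional loop the abstract alludes to, not the first-order ME calculus or Darwin's actual procedures.

```python
# DPLL with naive lemma learning (illustrative sketch, not ME/Darwin).
# Clauses are frozensets of nonzero ints; a negative int is a negated atom.

def unit_propagate(clauses, assignment):
    """Constraint propagation: assign literals forced by unit clauses.
    Returns False on conflict (a fully falsified clause), True otherwise."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            open_lits = [l for l in clause
                         if l not in assignment and -l not in assignment]
            if not open_lits:
                return False  # every literal falsified: conflict
            if len(open_lits) == 1:
                assignment.add(open_lits[0])  # forced (unit) assignment
                changed = True
    return True

def dpll(clauses, assignment=None, decisions=()):
    """Backtracking search; returns a model (set of literals) or None."""
    assignment = set(assignment or ())
    if not unit_propagate(clauses, assignment):
        if decisions:
            # Lemma: the negation of the decisions that led to this
            # failure, preventing the same combination later on.
            clauses.append(frozenset(-d for d in decisions))
        return None
    variables = {abs(l) for c in clauses for l in c}
    unassigned = variables - {abs(l) for l in assignment}
    if not unassigned:
        return assignment  # model found
    v = min(unassigned)
    for lit in (v, -v):  # non-deterministic decision step
        result = dpll(clauses, assignment | {lit}, decisions + (lit,))
        if result is not None:
            return result
    return None
```

A real learner would derive a minimal conflict clause rather than negate the whole decision sequence; the point here is only the shape of the decide/propagate/learn loop that ME lifts to first-order logic.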