Combining relational learning with SMT solvers using CEGAR

  • Authors:
  • Arun Chaganty;Akash Lal;Aditya V. Nori;Sriram K. Rajamani

  • Affiliations:
  • Stanford;Microsoft Research, India;Microsoft Research, India;Microsoft Research, India

  • Venue:
  • CAV '13: Proceedings of the 25th International Conference on Computer Aided Verification
  • Year:
  • 2013

Abstract

In statistical relational learning, one is concerned with inferring the most likely explanation (or world) that satisfies a given set of weighted constraints. The weight of a constraint signifies our confidence in the constraint, and the most likely world that explains a set of constraints is simply a satisfying assignment that maximizes the total weight of the satisfied constraints. The relational learning community has developed specialized solvers (e.g., Alchemy and Tuffy) for such weighted constraints independently of the work on SMT solvers in the verification community. In this paper, we show how to leverage SMT solvers to significantly improve the performance of relational solvers. Constraints associated with a weight of 1 (or 0) are called axioms because they must be satisfied (or violated) by the final assignment. Axioms can create difficulties for relational solvers. We offload the burden of handling axioms to an SMT solver and only lazily pass information back to the relational solver. This information can be either a subset of the axioms or generalized axioms (similar to predicate generalization in verification). We implemented our algorithm in a tool called Soft-Cegar that outperforms the state-of-the-art relational solvers Tuffy and Alchemy on four real-world applications. We hope this work opens the door for further collaboration between relational learning and SMT solvers.
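
To make the lazy-axiom loop described in the abstract concrete, below is a minimal, hypothetical Python sketch: a brute-force stand-in for the relational solver maximizes the total weight of satisfied soft constraints subject only to the axioms passed back so far, a separate check (playing the role of the SMT side) looks for a violated axiom in the proposed world, and any violation is added to the working set before the next round. The variables, constraints, and function names here are illustrative assumptions, not the Soft-Cegar implementation.

```python
# Hypothetical sketch of the lazy-axiom (CEGAR-style) loop from the abstract.
# The brute-force "relational solver" and all constraints are illustrative.
from itertools import product

VARS = ["p", "q", "r"]

# Soft constraints: (weight, predicate over a world dict).
SOFT = [
    (0.9, lambda w: w["p"]),            # prefer p to hold
    (0.7, lambda w: not w["q"]),        # prefer q to be false
    (0.4, lambda w: w["r"] or w["q"]),  # weak preference for r or q
]

# Axioms (weight 1): must hold in any acceptable world.
AXIOMS = [
    lambda w: (not w["p"]) or w["q"],   # p -> q
    lambda w: (not w["q"]) or w["r"],   # q -> r
]

def relational_solver(active_axioms):
    """Stand-in for Tuffy/Alchemy: maximize satisfied soft weight,
    subject only to the axioms passed back so far."""
    best, best_score = None, float("-inf")
    for bits in product([False, True], repeat=len(VARS)):
        w = dict(zip(VARS, bits))
        if not all(ax(w) for ax in active_axioms):
            continue
        score = sum(wt for wt, c in SOFT if c(w))
        if score > best_score:
            best, best_score = w, score
    return best, best_score

def axiom_check(world):
    """Stand-in for the SMT side: report a violated axiom, if any."""
    for ax in AXIOMS:
        if not ax(world):
            return ax
    return None

active = []                      # lazily accumulated axioms
while True:
    world, score = relational_solver(active)
    violated = axiom_check(world)
    if violated is None:
        print("world:", world, "soft weight:", score)
        break
    active.append(violated)      # refine and retry
```

On this toy instance the loop converges after two refinements: the unconstrained optimum violates p -> q, the next proposal violates q -> r, and the third proposal satisfies both axioms. Passing back generalized axioms, as the paper suggests, would prune more candidate worlds per iteration than this one-axiom-at-a-time sketch.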