Using more reasoning to improve #SAT solving

  • Authors:
  • Jessica Davies; Fahiem Bacchus

  • Affiliations:
  • Department of Computer Science, University of Toronto, Toronto, Canada (both authors)

  • Venue:
  • AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, Volume 1
  • Year:
  • 2007

Abstract

Many real-world problems, including inference in Bayes Nets, can be reduced to #SAT, the problem of counting the number of models of a propositional theory. This has motivated the need for efficient #SAT solvers. Currently, such solvers utilize a modified version of DPLL that employs decomposition and caching, techniques that significantly increase the time it takes to process each node in the search space. In addition, the search space is significantly larger than when solving SAT, since we must continue searching even after the first solution has been found. It has previously been demonstrated that the size of a DPLL search tree can be significantly reduced by doing more reasoning at each node. However, for SAT the reductions gained are often not worth the extra time required. In this paper we verify the hypothesis that for #SAT this balance changes. In particular, we show that additional reasoning can reduce the size of a #SAT solver's search space, that this reduction cannot always be achieved by the already utilized technique of clause learning, and that this additional reasoning can be cost-effective.
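
As a concrete illustration of the DPLL-based approach the abstract refers to, the sketch below (not the authors' system) counts models of a CNF formula using the two techniques mentioned: decomposition into variable-disjoint components, whose counts are multiplied together, and caching of counts for previously encountered sub-formulas. The function names (`model_count`, `count`, `simplify`), the naive branching heuristic, and the dict-based cache are illustrative assumptions made here, not details taken from the paper.

```python
def vars_of(clauses):
    """Variables mentioned in a set of clauses."""
    return {abs(lit) for clause in clauses for lit in clause}


def simplify(clauses, lit):
    """Assign `lit` true: drop satisfied clauses, shrink the rest.
    Returns None if an empty (falsified) clause is produced."""
    out = set()
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied, drop it
        if -lit in clause:
            clause = clause - {-lit}      # remove the falsified literal
            if not clause:
                return None               # conflict
        out.add(clause)
    return frozenset(out)


def components(clauses):
    """Split a formula into variable-disjoint components (decomposition)."""
    var_to_clauses = {}
    for clause in clauses:
        for lit in clause:
            var_to_clauses.setdefault(abs(lit), []).append(clause)
    seen, comps = set(), []
    for clause in clauses:
        if clause in seen:
            continue
        comp, stack = set(), [clause]
        seen.add(clause)
        while stack:                      # flood-fill over shared variables
            cur = stack.pop()
            comp.add(cur)
            for lit in cur:
                for nxt in var_to_clauses[abs(lit)]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
        comps.append(frozenset(comp))
    return comps


cache = {}                                # sub-formula -> model count (caching)


def count(clauses):
    """#DPLL: count models over exactly the variables mentioned in `clauses`."""
    if not clauses:
        return 1
    if clauses in cache:
        return cache[clauses]
    comps = components(clauses)
    if len(comps) > 1:
        result = 1
        for comp in comps:                # disjoint components: counts multiply
            result *= count(comp)
    else:
        n = len(vars_of(clauses))
        v = abs(next(iter(next(iter(clauses)))))   # naive branch-variable choice
        result = 0
        for lit in (v, -v):
            reduced = simplify(clauses, lit)
            if reduced is not None:
                # variables that vanished (other than v) are free: each doubles the count
                free = n - 1 - len(vars_of(reduced))
                result += count(reduced) * (2 ** free)
    cache[clauses] = result
    return result


def model_count(cnf, n_vars):
    """Count models of a CNF (list of lists of integer literals) over variables 1..n_vars."""
    formula = frozenset(frozenset(c) for c in cnf)
    if frozenset() in formula:
        return 0
    return count(formula) * 2 ** (n_vars - len(vars_of(formula)))


# (x1 or x2) and (x3 or not x4) over 4 variables has 3 * 3 = 9 models.
print(model_count([[1, 2], [3, -4]], 4))
```

Even in this toy form, the overhead the abstract describes is visible: every search node pays for component detection and cache lookups, and the search cannot stop at the first model. This is the setting in which the paper argues that spending extra time on per-node reasoning, beyond clause learning, can pay off for #SAT even where it would not for SAT.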