On the relation between answer set and SAT procedures (or, between cmodels and smodels)

  • Authors:
  • Enrico Giunchiglia, Marco Maratea

  • Affiliations:
  • STAR-Lab, DIST, University of Genova, Genova, Italy (both authors)

  • Venue:
  • ICLP'05: Proceedings of the 21st International Conference on Logic Programming
  • Year:
  • 2005


Abstract

Answer Set Programming (ASP) is a declarative paradigm for solving search problems. State-of-the-art systems for ASP include smodels, dlv, cmodels, and assat. In this paper, our goal is to study the computational properties of such systems from both a theoretical and an experimental point of view. From the theoretical point of view, we start our analysis with cmodels and smodels. We show that, though these two systems are apparently different, they are equivalent on a significant class of programs, called tight (roughly, programs whose positive dependency graph is acyclic). By equivalent, we mean that they explore search trees with the same branching nodes (assuming, of course, the same branching heuristic). Given this result, and given that the cmodels search engine is based on the Davis-Logemann-Loveland procedure (dll) for propositional satisfiability (SAT), we are able to establish that many of the properties holding for dll also hold for cmodels and thus for smodels. On the other hand, we also show that there exist classes of non-tight programs which are exponentially hard for cmodels but “easy” for smodels. We also discuss how our results extend to other systems. From the experimental point of view, we analyze which combinations of reasoning strategies work best on which problems. In particular, we extended cmodels to obtain a single platform offering a variety of reasoning strategies, and conducted an extensive experimental analysis on “small” randomly generated and “large” non-randomly generated programs. On these programs, our results show that the reasoning strategies that work best on the small problems are completely different from the ones that work best on the large ones. These results indicate, for example, that we can hardly expect to develop one solver with the best performance on all categories of problems. As a consequence, (i) developers should focus on specific classes of benchmarks, and (ii) benchmarking should take into account whether solvers have been designed for specific classes of programs.
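As background for readers less familiar with the SAT side, below is a minimal, illustrative sketch of the dll (DPLL) procedure mentioned in the abstract: unit propagation interleaved with chronological branching on an unassigned variable. This is a textbook rendering in Python, not cmodels' actual search engine; the clause encoding (signed integers, DIMACS-style) and the naive minimum-index branching heuristic are choices made here purely for brevity.

    from typing import Optional

    Clause = frozenset[int]  # literals as signed ints: 3 means x3, -3 means not-x3

    def dll(clauses: list[Clause], assignment: dict[int, bool]) -> Optional[dict[int, bool]]:
        """Return a satisfying assignment extending `assignment`, or None."""
        # Unit propagation: repeat until fixpoint or conflict.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any((lit > 0) == assignment.get(abs(lit)) for lit in clause
                       if abs(lit) in assignment):
                    continue                       # clause already satisfied
                free = [lit for lit in clause if abs(lit) not in assignment]
                if not free:
                    return None                    # all literals false: conflict
                if len(free) == 1:                 # unit clause: forced assignment
                    assignment[abs(free[0])] = free[0] > 0
                    changed = True
        unassigned = {abs(lit) for c in clauses for lit in c} - assignment.keys()
        if not unassigned:
            return assignment                      # every clause satisfied: model found
        var = min(unassigned)                      # branching heuristic (naive here)
        for value in (True, False):                # branch: try var = True, then False
            model = dll(clauses, {**assignment, var: value})
            if model is not None:
                return model
        return None                                # both branches failed: backtrack

    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(dll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})], {}))

In cmodels, a loop of this shape runs on a propositional translation of the logic program (its completion, in the tight case); the paper's theoretical result is that, on tight programs and under the same branching heuristic, the branching nodes of this search coincide with those of smodels' native procedure.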