Modern SAT solvers rely heavily on heuristics, so benchmarking is of prime importance in evaluating the performance of different solvers. Meaningful benchmarking, however, is not necessarily straightforward. We present our experiments running several SAT solvers on the IBM CNF Benchmark. Based on the results, we attempt to define guidelines for a sound benchmarking methodology for SAT solvers applied to real-life bounded model checking (BMC) applications.