3-SAT is a canonical NP-complete problem: satisfiable and unsatisfiable instances cannot, in general, be distinguished in polynomial time. Random 3-SAT formulas, however, exhibit a phase transition: for any large number of variables n, sparse random formulas (with m ≤ 3.145n clauses) are almost always satisfiable, dense ones (with m ≥ 4.596n clauses) are almost always unsatisfiable, and the transition occurs sharply as m/n crosses some threshold. The limiting threshold is believed to be around 4.2, but it is not even known that a limit exists. Proofs that sparse instances are satisfiable have come from analyzing heuristics: the better the heuristic analyzed, the denser the instances that can be proved satisfiable with high probability. To date, the good heuristics have all been extensions of unit-clause resolution, all expressible within a common framework and analyzable in a uniform manner through the differential equation method. Any algorithm expressible in this framework can be "tuned" optimally. The tuning extends the differential-equation analysis and makes use of a "maximum-density multiple-choice knapsack" problem; the structure of optimal knapsack solutions elegantly characterizes the choices made by an optimally tuned algorithm. Optimized algorithms improve the known satisfiability bound from density 3.145 to 3.26. Many open problems remain. It is non-trivial to extend the methods to 4-SAT and beyond. If the results are to apply to "real-world" 3-SAT instances, the theory should be extended to formulas that need not be uniformly random but obey some weaker conditions. There is also theoretical evidence that in the unsatisfiable regime it is difficult to prove the unsatisfiability of a given formula, while throughout the known satisfiable region, linear-time algorithms produce satisfying assignments with high probability.
Is the unsatisfiable regime truly hard, and is the whole of the satisfiable regime truly easy? In particular, as the scope of myopic, local algorithms is expanded so that they examine more and more variables, can such algorithms solve random instances arbitrarily close to the threshold density?
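The simplest member of the unit-clause family discussed above can be sketched as follows. This is a minimal illustration, not the paper's optimized algorithm: when a unit clause exists, satisfy it; otherwise, set a random unassigned variable to a random value, and fail if an empty clause ever appears. All function names here are invented for illustration.

```python
import random

def random_3sat(n, m, rng):
    """Random 3-CNF formula: m clauses over variables 1..n, each clause
    three distinct variables with independently random signs.
    A literal is +v (positive) or -v (negated)."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def unit_clause_heuristic(clauses, n, rng):
    """UC heuristic: repeatedly satisfy a unit clause if one exists,
    otherwise assign a random unassigned variable a random value.
    Returns a satisfying assignment (dict: var -> bool), or None if an
    empty clause is produced (the heuristic failed on this run)."""
    assignment = {}
    clauses = [list(c) for c in clauses]
    unassigned = set(range(1, n + 1))
    while unassigned:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is not None:
            lit = unit[0]                      # forced step: unit-clause propagation
        else:
            var = rng.choice(sorted(unassigned))
            lit = var if rng.random() < 0.5 else -var  # free step: random literal
        assignment[abs(lit)] = lit > 0
        unassigned.discard(abs(lit))
        # Simplify: remove satisfied clauses, shrink clauses with the falsified literal.
        simplified = []
        for c in clauses:
            if lit in c:
                continue
            c = [l for l in c if l != -lit]
            if not c:
                return None                    # empty clause: contradiction
            simplified.append(c)
        clauses = simplified
    return assignment
```

In the sparse regime this heuristic succeeds with high probability; the stronger bounds cited above (up to density 3.26) come from tuning richer variants of this same scheme via the differential equation method, not from UC itself.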