Nonsystematic Search and No-Good Learning

  • Authors:
  • E. Thomas Richards; Barry Richards

  • Affiliations:
  • IC-Parc, Imperial College, London SW7 2AZ (e-mail: etr@icparc.ic.ac.uk); IC-Parc, Imperial College, London SW7 2AZ (e-mail: ebr@icparc.ic.ac.uk)

  • Venue:
  • Journal of Automated Reasoning
  • Year:
  • 2000

Abstract

Nonsystematic search algorithms seem, in general, to be well suited to large-scale problems with many solutions. However, they tend to perform badly on problems with few solutions, and, being incomplete, they cannot be used for insoluble problems at all.

Here we present a new algorithm, learn-SAT, that, although based on nonsystematic search, is complete. Completeness is achieved through a process of no-good learning, learning-by-merging, which requires exponential space in the worst case. We show, nevertheless, that learn-SAT performs very well on certain SAT problems that are tightly constrained or insoluble. Indeed, its performance generally approximates that of the best SAT algorithms, and it does much better at lower clause densities. learn-SAT also retains much of the efficient performance of nonsystematic search on large-scale problems with many solutions, at least relative to backtrack search algorithms.

These results indicate that the burden on memory imposed by no-good learning is not generally a problem for learn-SAT. This is perhaps surprising in view of previous work. Even more surprising is the scalability of learn-SAT: on some types of problem it scales very much better than the nearest competitive algorithm, although there are other types for which this is not the case.

The performance profile of learn-SAT emerges from an experimental methodology related to the one outlined by Mammen and Hogg in 1997.
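To give a feel for the kind of no-good learning the abstract refers to, the sketch below illustrates merging of no-goods in propositional form. This is an illustrative assumption, not the authors' actual learning-by-merging procedure: here a no-good is a partial assignment that cannot be extended to a solution, and merging two no-goods that assign opposite values to a single variable (and are compatible elsewhere) yields a new, smaller no-good without that variable. Deriving the empty no-good proves unsatisfiability, which is the sense in which recorded no-goods can make an otherwise incomplete search complete. The function name `merge_nogoods` is hypothetical.

```python
def merge_nogoods(n1, n2, var):
    """Merge two no-goods (dicts of var -> bool) that assign opposite
    values to `var`.  Returns the merged no-good without `var`, or None
    if the two no-goods are incompatible on some other variable.

    Illustrative sketch only -- not the paper's learning-by-merging
    algorithm itself."""
    assert n1[var] != n2[var], "no-goods must conflict on var"
    merged = {v: b for v, b in n1.items() if v != var}
    for v, b in n2.items():
        if v == var:
            continue
        if merged.get(v, b) != b:
            return None          # disagree on another variable: no merge
        merged[v] = b
    return merged

# A trivially insoluble problem with clauses (x) and (not x):
# each falsified clause yields a no-good directly.
ng1 = {"x": False}   # falsifies clause (x)
ng2 = {"x": True}    # falsifies clause (not x)
print(merge_nogoods(ng1, ng2, "x"))   # {} : the empty no-good => UNSAT
```

In the worst case the set of recorded no-goods can grow exponentially with the number of variables, which is the space cost the abstract mentions; the paper's experimental claim is that this worst case is rarely reached in practice.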