Further reflections on a theory for basic algorithms

  • Authors: Allan Borodin
  • Affiliations: Department of Computer Science, University of Toronto
  • Venue: AAIM'06, Proceedings of the Second International Conference on Algorithmic Aspects in Information and Management
  • Year: 2006

Abstract

Can we optimally solve Max2SAT in (say) time O(|F| log |F|), where |F| is the length of the formula F? Of course, since Max2SAT is NP-hard, we can confidently rely on our strongly held belief that no NP-hard problem can be solved optimally in polynomial time. But obtaining unconditional complexity lower bounds (even linear or near-linear bounds) remains the central challenge of complexity theory. In the complementary fields of complexity theory and algorithm design and analysis, we ask questions such as “what is the best polynomial-time approximation ratio that can be achieved for Max2SAT?” The best negative results are derived from the beautiful development of PCP proofs. In terms of obtaining better approximation algorithms, we appeal to a variety of algorithmic techniques: very basic techniques such as greedy algorithms, dynamic programming (with scaling), divide and conquer, and local search, as well as more technically involved methods such as LP relaxation with randomized rounding and semi-definite programming (see [34] and [30] for an elegant presentation of these randomized methods and of derandomization using conditional expectations). A more refined question might ask “what is the best approximation ratio (for a given problem such as Max2SAT) that can be obtained in (say) time O(n log n)?”, where n is the length of the input in some standard representation of the problem. What algorithmic techniques should we consider if we are constrained to time O(n log n)?
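
The last technique named in the abstract, derandomization by conditional expectations, can be made concrete. The sketch below is not from the paper: the clause representation, the function names, and the assumption that every clause has exactly two literals over distinct variables are choices made for illustration. A uniformly random assignment satisfies each 2-clause with probability 3/4 (a clause fails only when both literals are false), and fixing variables one at a time so that the conditional expectation never decreases yields a deterministic 3/4-approximation.

    def expected_satisfied(clauses, assignment):
        """Expected number of satisfied clauses when the variables in
        `assignment` are fixed and the rest are set uniformly at random."""
        total = 0.0
        for clause in clauses:
            p_unsat = 1.0
            for var, positive in clause:
                if var in assignment:
                    if assignment[var] == positive:
                        p_unsat = 0.0   # literal fixed true: clause satisfied
                    # literal fixed false: P(literal false) = 1, factor unchanged
                else:
                    p_unsat *= 0.5      # an unset literal is false w.p. 1/2
            total += 1.0 - p_unsat
        return total

    def derandomized_max2sat(clauses, n):
        """Fix x_0, ..., x_{n-1} one at a time, choosing the value that keeps
        the conditional expectation from decreasing.  Since the initial
        expectation is at least (3/4) * len(clauses), the final assignment
        satisfies at least 3/4 of the clauses."""
        assignment = {}
        for v in range(n):
            assignment[v] = True
            e_true = expected_satisfied(clauses, assignment)
            assignment[v] = False
            e_false = expected_satisfied(clauses, assignment)
            assignment[v] = e_true >= e_false
        return assignment

    # Example: (x0 or x1) and (not x0 or x2) and (not x1 or not x2)
    clauses = [[(0, True), (1, True)],
               [(0, False), (2, True)],
               [(1, False), (2, False)]]
    print(derandomized_max2sat(clauses, 3))  # satisfies all three clauses here

As written, the sketch recomputes the expectation from scratch at each step and so runs in O(n |F|) time; maintaining the conditional expectation incrementally, updating only the clauses containing the variable just fixed, brings this down to linear time, which bears directly on the abstract's question of what approximation ratios are attainable under an O(n log n) time constraint.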