Learning Boolean formulas

  • Authors:
  • Michael Kearns, AT&T Bell Labs, Murray Hill, NJ
  • Ming Li, Univ. of Waterloo, Waterloo, Ont., Canada
  • Leslie Valiant, Harvard Univ., Cambridge, MA

  • Venue:
  • Journal of the ACM (JACM)
  • Year:
  • 1994


Abstract

Efficient distribution-free learning of Boolean formulas from positive and negative examples is considered. It is shown that classes of formulas that are efficiently learnable from only positive examples or only negative examples have certain closure properties. A new substitution technique is used to show that in the distribution-free case learning DNF (disjunctive normal form formulas) is no harder than learning monotone DNF. We prove that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed. It is also shown that, if the examples are drawn from uniform distributions, then the class of DNF in which each variable occurs at most once is efficiently weakly learnable (i.e., individual examples are correctly classified with a probability larger than 1/2 + 1/p, where p is a polynomial in the relevant parameters of the learning problem). We then show an equivalence between the notion of weak learning and the notion of group learning, where a group of examples of polynomial size, either all positive or all negative, must be correctly classified with high probability.
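The substitution idea behind the DNF-to-monotone-DNF reduction can be sketched concretely (a minimal illustration under assumed details, not the paper's exact construction): extend each n-bit example with the complements of its bits, so that every negated literal ¬x_i in a DNF corresponds to a positive literal over the extended input. The target formula `f` below is a hypothetical example chosen for the demonstration.

```python
from itertools import product

def extend(x):
    """Map an n-bit example to 2n bits: the original bits followed by
    their complements. Any DNF over x1..xn then corresponds to a
    monotone DNF over the 2n extended bits (xi -> ui, not-xi -> vi)."""
    return x + [1 - b for b in x]

def f(x):
    # Hypothetical DNF target: (x0 AND NOT x1) OR x2
    return bool((x[0] and not x[1]) or x[2])

def g(z):
    # The corresponding monotone DNF over z = extend(x) for n = 3:
    # (u0 AND v1) OR u2, where v1 sits at index n + 1 = 4.
    # No negations appear, so g is monotone.
    return bool((z[0] and z[4]) or z[2])

# Sanity check: f(x) == g(extend(x)) on every 3-bit input, so a learner
# for monotone DNF applied to extended examples recovers f.
assert all(f(list(x)) == g(extend(list(x))) for x in product([0, 1], repeat=3))
```

Under this transformation, an efficient learner for monotone DNF, run on extended examples labeled by f, yields a hypothesis that predicts f after composing with `extend`; this is why learning general DNF is no harder than learning monotone DNF in the distribution-free setting.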