We show that 2-term DNF formulae are learnable in quadratic time from only a logarithmic number of positive examples, assuming the examples are drawn from a bounded distribution. We also show that k-term DNF formulae are learnable in polynomial time from positive and negative examples drawn from a bounded distribution.
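To illustrate the style of positive-example algorithm the abstract refers to, the sketch below implements the classical elimination procedure for learning a single conjunction (a 1-term DNF) from positive examples only. This is a textbook warm-up, not the paper's 2-term or k-term algorithm, and the function names are illustrative.

```python
# Hedged sketch: classical literal-elimination learning of a 1-term DNF
# (a single conjunction) from positive examples only. This is a standard
# warm-up algorithm, not the quadratic-time 2-term method of the paper.

def learn_conjunction(n, positives):
    """Learn a conjunction over n Boolean variables from positive examples.

    Start with every literal in the hypothesis -- (i, True) denotes x_i,
    (i, False) denotes its negation -- and delete any literal falsified
    by some positive example. Each surviving literal is consistent with
    all positives seen so far.
    """
    hypothesis = {(i, v) for i in range(n) for v in (True, False)}
    for example in positives:  # example: tuple of n booleans
        hypothesis = {(i, v) for (i, v) in hypothesis if example[i] == v}
    return hypothesis

def predict(hypothesis, x):
    """Evaluate the learned conjunction on an assignment x."""
    return all(x[i] == v for (i, v) in hypothesis)
```

Because the hypothesis only ever shrinks, each positive example can be processed in O(n) time; the paper's stronger claim is that, under a bounded distribution, logarithmically many positive examples already suffice for the 2-term case.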