Communications of the ACM
On the learnability of Boolean formulae
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
Computational limitations on learning from examples
Journal of the ACM (JACM)
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Learning 2μ DNF formulas and kμ decision trees
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
Exact learning of read-twice DNF formulas (extended abstract)
SFCS '91 Proceedings of the 32nd annual symposium on Foundations of computer science
Fast learning of k-term DNF formulas with queries
STOC '92 Proceedings of the twenty-fourth annual ACM symposium on Theory of computing
Learning read-once formulas with queries
Journal of the ACM (JACM)
Linear time deterministic learning of k-term DNF
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Asking questions to minimize errors
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Computers and Intractability: A Guide to the Theory of NP-Completeness
Machine Learning
How many queries are needed to learn?
STOC '95 Proceedings of the twenty-seventh annual ACM symposium on Theory of computing
Generalized teaching dimensions and the query complexity of learning
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
How many queries are needed to learn?
Journal of the ACM (JACM)
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
Generating all maximal independent sets of bounded-degree hypergraphs
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
A Simulated Annealing-Based Learning Algorithm for Boolean DNF
AI '99 Proceedings of the 12th Australian Joint Conference on Artificial Intelligence: Advanced Topics in Artificial Intelligence
Generalized Graph Colorability and Compressibility of Boolean Formulae
ISAAC '98 Proceedings of the 9th International Symposium on Algorithms and Computation
Sharper Bounds for the Hardness of Prototype and Feature Selection
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
Self-improved gaps almost everywhere for the agnostic approximation of monomials
Theoretical Computer Science
Bshouty, Goldman, Hancock, and Matar have shown that DNF formulas with up to log n terms can be properly learned in the exact model with equivalence and membership queries. Under standard complexity-theoretic assumptions, we show that this positive result for proper learning cannot be significantly improved in the exact model or in the PAC model extended to allow membership queries. Our negative results are derived from two general techniques for proving such results in the exact model and the extended PAC model. As a further application of these techniques, we consider read-thrice DNF formulas. Here we improve on Aizenstein, Hellerstein, and Pitt's negative result for proper learning in the exact model in two ways. First, we show that their assumption of NP ≠ co-NP can be replaced by the weaker assumption of P ≠ NP. Second, we show that read-thrice DNF formulas are not properly learnable in the extended PAC model, assuming RP ≠ NP.