Producing a small DNF expression consistent with given data is a classical problem in computer science that occurs in a number of forms and has numerous applications. We consider two standard variants of this problem. The first is two-level logic minimization, or finding a minimum DNF formula consistent with a given complete truth table (TT-MinDNF). This problem was formulated by Quine in 1952 and has since been one of the key problems in logic design. It was proved NP-complete by Masek in 1979. The best known polynomial-time approximation algorithm is based on a reduction to the SET-COVER problem and produces a DNF formula of size O(d · OPT), where d is the number of variables. We prove that TT-MinDNF is NP-hard to approximate within d^γ for some constant γ > 0, establishing the first inapproximability result for the problem. The other DNF minimization problem we consider is PAC learning of DNF expressions when the learning algorithm must output a DNF expression as its hypothesis (referred to as proper learning). We prove that DNF expressions are NP-hard to PAC learn properly even when the learner has access to membership queries, thereby answering a long-standing open question due to Valiant [L.G. Valiant, A theory of the learnable, Comm. ACM 27 (11) (1984) 1134-1142]. Finally, we provide a concrete connection between these two variants of the DNF minimization problem. Specifically, we prove that inapproximability of TT-MinDNF implies hardness results for restricted proper learning of DNF expressions with membership queries, even when learning with respect to the uniform distribution only.
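The SET-COVER-based approximation mentioned above can be illustrated with a small sketch (this is an illustrative reduction assumed from the standard greedy set-cover approach, not the specific algorithm analyzed in the paper): the true points of the truth table are the elements to cover, the implicants of the function are the candidate sets, and the greedy rule repeatedly selects the implicant covering the most still-uncovered true points. The function name `greedy_tt_min_dnf` and the `'*'` don't-care encoding are hypothetical choices for this sketch; enumerating all 3^d candidate terms is exponential in d and only feasible for tiny d.

```python
from itertools import product

def greedy_tt_min_dnf(truth_table, d):
    """Greedy SET-COVER sketch for TT-MinDNF (illustrative, not the paper's
    construction): cover the ON-set of a complete truth table on d variables
    with implicants, greedily picking the implicant that covers the most
    uncovered true points."""
    points = list(product((0, 1), repeat=d))
    on_set = {x for x in points if truth_table[x]}
    off_set = {x for x in points if not truth_table[x]}

    def matches(pattern, x):
        # A term matches x if every fixed literal agrees; '*' means the
        # variable is absent from the term.
        return all(p == '*' or p == b for p, b in zip(pattern, x))

    # Enumerate all 3^d terms and keep the implicants: terms that cover at
    # least one true point and no false point.
    terms = []
    for pattern in product((0, 1, '*'), repeat=d):
        covers = {x for x in on_set if matches(pattern, x)}
        if covers and not any(matches(pattern, x) for x in off_set):
            terms.append((pattern, covers))

    # Greedy set cover over the ON-set.
    uncovered, dnf = set(on_set), []
    while uncovered:
        pattern, covers = max(terms, key=lambda t: len(t[1] & uncovered))
        dnf.append(pattern)
        uncovered -= covers
    return dnf
```

For example, for f(x1, x2) = x1 OR x2 the sketch returns two terms (e.g. the implicants x1 and x2), matching the minimum DNF size; the classical analysis of greedy set cover gives the logarithmic approximation guarantee that underlies the O(d · OPT) bound quoted above.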