Communications of the ACM.
Information Processing Letters.
Computational limitations on learning from examples. Journal of the ACM (JACM).
Training a 3-node neural network is NP-complete. COLT '88: Proceedings of the First Annual Workshop on Computational Learning Theory.
Theoretical Computer Science, special issue on structure in complexity theory.
Learning in the presence of malicious errors. SIAM Journal on Computing.
Weakly learning DNF and characterizing statistical query learning using Fourier analysis. STOC '94: Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing.
Robust trainability of single neurons. Journal of Computer and System Sciences.
The complexity and approximability of finding maximum feasible subsystems of linear relations. Theoretical Computer Science.
Some optimal inapproximability results. STOC '97: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing.
Machine Learning.
Maximizing Agreements and CoAgnostic Learning. ALT '02: Proceedings of the 13th International Conference on Algorithmic Learning Theory.
Hardness Results for General Two-Layer Neural Networks. COLT '00: Proceedings of the Thirteenth Annual Conference on Computational Learning Theory.
Bounds for the Minimum Disagreement Problem with Applications to Learning Theory. COLT '02: Proceedings of the 15th Annual Conference on Computational Learning Theory.
On the difficulty of approximately maximizing agreements. Journal of Computer and System Sciences.
Clique is hard to approximate within n^(1-ε). FOCS '96: Proceedings of the 37th Annual Symposium on Foundations of Computer Science.
We study heuristic learnability of classes of Boolean formulas, a model proposed by Pitt and Valiant. In this model of example-based learning of a concept class C by a hypothesis class H, the learner seeks a hypothesis h ∈ H that agrees with all of the negative (resp. positive) examples and with a maximum number of the positive (resp. negative) examples. This is equivalent to maximizing agreement with a training sample under the constraint that all misclassifications are confined to examples with positive (resp. negative) labels. Several recent papers have studied the more general problem of maximizing agreement without this one-sided error constraint. We show that for many classes (though not all), the maximum agreement problem with one-sided error is harder than the general maximum agreement problem. We then give lower bounds on the approximability of these one-sided error problems for many concept classes, including Halfspaces, Decision Lists, XOR, k-term DNF, and neural nets.
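
To make the objective concrete, here is a minimal brute-force sketch (our illustration, not taken from the paper): it enumerates a toy finite hypothesis class, monotone conjunctions over n Boolean variables, discards any hypothesis that accepts a negative example (the one-sided error constraint), and among the survivors returns one agreeing with the most positive examples. The function names and the choice of hypothesis class are ours for illustration only; for the classes studied in the paper this optimization is NP-hard, which is exactly what the stated lower bounds address.

    from itertools import product

    def conjunction(mask):
        # Hypothesis: AND of the Boolean variables selected by `mask`.
        return lambda x: all(x[i] for i in range(len(x)) if mask[i])

    def one_sided_max_agreement(positives, negatives, n):
        # Brute-force search over all 2^n monotone conjunctions (toy sizes only).
        best_mask, best_score = None, -1
        for mask in product([0, 1], repeat=n):
            h = conjunction(mask)
            # One-sided error constraint: no negative example may be accepted.
            if any(h(x) for x in negatives):
                continue
            # Objective: agree with as many positive examples as possible.
            score = sum(h(x) for x in positives)
            if score > best_score:
                best_mask, best_score = mask, score
        return best_mask, best_score

    # Toy sample over n = 3 variables.
    pos = [(1, 1, 0), (1, 1, 1), (1, 0, 1)]
    neg = [(0, 1, 1), (0, 0, 0)]
    mask, agreed = one_sided_max_agreement(pos, neg, n=3)
    print(mask, agreed)  # (1, 0, 0) 3: the conjunction x0 covers all three positives

Swapping which label must be classified perfectly (passing the positives as the constrained side instead) gives the symmetric variant mentioned in the abstract.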