We address the computational complexity of learning in the agnostic framework. For a variety of common concept classes we prove that, unless P = NP, there is no polynomial-time approximation scheme for finding a member of the class that approximately maximizes agreement with a given training sample. In particular, our results apply to the classes of monomials, axis-aligned hyper-rectangles, closed balls, and monotone monomials. For each of these classes, we prove that approximating the maximal agreement to within some fixed constant (independent of the sample size and of the dimensionality of the sample space) is NP-hard. For the class of half-spaces, we prove that, for any ε > 0, it is NP-hard to approximately maximize agreement to within a factor of (418/415 - ε), improving on the best previously known constant for this problem and using a simpler proof. An interesting feature of our proofs is that, for each of the classes we discuss, we exhibit patterns of training examples for which approximating the maximal agreement within that concept class is hard, yet efficient agreement maximization is possible within other concept classes. These results highlight a new aspect of the model-selection problem: they imply that the choice of hypothesis class for agnostic learning, from among those considered in this paper, can drastically affect the computational complexity of the learning process.
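To make the objective concrete, the following is a minimal sketch (not taken from the paper) of the maximum-agreement problem for one of the classes mentioned above, monotone monomials over {0,1}^n. The function names (`monomial_predict`, `agreement`, `best_monotone_monomial`) are hypothetical, and the exhaustive search runs in 2^n time; it only illustrates the quantity the hardness results concern, namely the largest achievable fraction of agreements on a possibly inconsistent sample.

```python
# Illustrative sketch of agreement maximization for monotone monomials
# over {0,1}^n. Brute force (2^n monomials); feasible only for tiny n.

from itertools import chain, combinations


def monomial_predict(variables, x):
    """A monotone monomial over the index set `variables` outputs 1
    iff every listed coordinate of x equals 1."""
    return int(all(x[i] == 1 for i in variables))


def agreement(variables, sample):
    """Fraction of labeled examples (x, y) on which the monomial agrees."""
    return sum(monomial_predict(variables, x) == y for x, y in sample) / len(sample)


def best_monotone_monomial(sample, n):
    """Exhaustively search all 2^n monotone monomials for one with
    maximum agreement on the sample."""
    all_subsets = chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1)
    )
    return max(all_subsets, key=lambda vs: agreement(vs, sample))


if __name__ == "__main__":
    # A small, deliberately inconsistent sample over {0,1}^3,
    # so no monomial can reach agreement 1.
    sample = [
        ((1, 1, 0), 1),
        ((1, 1, 1), 1),
        ((0, 1, 1), 0),
        ((1, 0, 1), 0),
        ((1, 1, 0), 0),  # conflicts with the first example
    ]
    best = best_monotone_monomial(sample, n=3)
    print("best monomial uses variables", best,
          "with agreement", agreement(best, sample))
```

The hardness results above say, roughly, that for these classes no polynomial-time algorithm can guarantee agreement within the stated constant factors of this optimum (unless P = NP), while the same sample may admit efficient agreement maximization under a different choice of hypothesis class.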