This paper studies α-coagnostic learnability of classes of Boolean formulas. To α-coagnostic learn C from H, the learner seeks a hypothesis h ∈ H whose probability of agreement (rather than disagreement, as in agnostic learning) with a labeled example is within a factor α of the best agreement probability achieved by any f ∈ C. Although 1-coagnostic learning is equivalent to agnostic learning, this is not true for α-coagnostic learning when 1/2 ≤ α < 1. Hardness results for α-coagnostic learning follow from the hardness of approximating the maximum agreement problem, in which the learner is given a set S of labeled examples and must find an h ∈ H that agrees with as many examples in S as the best f ∈ C does. Maximum agreement problems have been studied extensively for classes such as monomials, monotone monomials, antimonotone monomials, halfspaces, and balls. We further the study of these problems and some of their extensions. For the above classes, we improve the best previously known factors α for which α-coagnostic learning is hard. We also give the first constant lower bounds for decision lists, exclusive-or, halfspaces (over the Boolean domain), 2-term DNF, and 2-term multivariate polynomials.
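To make the maximum agreement problem concrete, the following is a minimal brute-force sketch for the monomial case; the function name, example set, and encoding of literals are illustrative assumptions, not from the paper. A monomial hypothesis is a conjunction of literals, encoded here by requiring each variable to be positive (1), negative (0), or absent (None); the sketch enumerates all 3^n such conjunctions and returns one agreeing with the most labeled examples, which is only feasible for very small n (the paper's point is precisely that approximating this optimum is hard for large instances).

```python
# Brute-force maximum monomial agreement over {0,1}^n (illustrative sketch,
# not the paper's method): O(3^n * |S|) time, practical only for tiny n.
from itertools import product

def monomial_agreement(examples, n):
    """examples: list of (x, y) with x a 0/1 tuple of length n, y in {0, 1}.
    Returns (best_agreement_count, best_pattern), where pattern[i] is 1 or 0
    if variable i must equal that value, or None if variable i is absent;
    the all-None pattern is the empty monomial (constant 1)."""
    best_count, best_pattern = -1, None
    for pattern in product((None, 0, 1), repeat=n):
        # h(x) = 1 iff every constrained variable matches its required value.
        count = 0
        for x, y in examples:
            h = all(lit is None or x[i] == lit for i, lit in enumerate(pattern))
            count += (int(h) == y)
        if count > best_count:
            best_count, best_pattern = count, pattern
    return best_count, best_pattern

# Usage: three examples over n = 2 variables; the monomial "x1"
# (pattern (1, None)) agrees with all three.
S = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0)]
print(monomial_agreement(S, 2))  # -> (3, (1, None))
```

In the learning-theoretic framing of the abstract, an α-coagnostic learner need only match a factor α of this optimal agreement count (in expectation over the example distribution), and the paper's hardness results show that even this relaxed goal is intractable for the listed classes.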