On the learnability of Boolean formulae
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
On the limits of proper learnability of subclasses of DNF formulas
COLT '94 Proceedings of the seventh annual conference on Computational learning theory
Toward Efficient Agnostic Learning
Machine Learning - Special issue on computational learning theory, COLT'92
An introduction to computational learning theory
Robust trainability of single neurons
Journal of Computer and System Sciences
Lower bounds on learning decision lists and trees
Information and Computation
A threshold of ln n for approximating set cover (preliminary version)
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing (STOC'94), May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
Improved Boosting Algorithms Using Confidence-rated Predictions
Machine Learning - The Eleventh Annual Conference on Computational Learning Theory
Computers and Intractability: A Guide to the Theory of NP-Completeness
Function-Free Horn Clauses Are Hard to Approximate
ILP '98 Proceedings of the 8th International Workshop on Inductive Logic Programming
On the Difficulty of Approximately Maximizing Agreements
COLT '00 Proceedings of the Thirteenth Annual Conference on Computational Learning Theory
Complexity in the case against accuracy estimation
Theoretical Computer Science
Linear degree extractors and the inapproximability of max clique and chromatic number
Proceedings of the thirty-eighth annual ACM symposium on Theory of computing
Maximizing agreements and coagnostic learning
Theoretical Computer Science - Algorithmic learning theory(ALT 2002)
Optimal Hardness Results for Maximizing Agreements with Monomials
CCC '06 Proceedings of the 21st Annual IEEE Conference on Computational Complexity
Given a learning sample, we study the hardness of finding monomials with low error, within the interval bounded below by the smallest error achieved by any monomial (the best rule) and bounded above by the error of the default class (the poorest rule). It is well known that when the lower bound is zero, a monomial with zero error can be found in linear time. We prove that when this bound is nonzero, then regardless of where the default-class error lies in (0,1/2), it becomes computationally hard to improve significantly on the default class. In fact, under standard complexity-theoretic assumptions, it may already be hard to beat the trivial approximation ratios, even when the time-complexity constraint is relaxed to quasi-polynomial or sub-exponential time. Our results also hold with uniform weights over the examples.
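The linear-time zero-error case mentioned in the abstract refers to the standard elimination algorithm for conjunctions: start from the most specific monomial (all 2n literals) and delete every literal falsified by a positive example. The following sketch is an illustrative implementation under assumed conventions (Boolean examples as tuples, a literal encoded as a pair `(i, v)` meaning "x_i must equal v"); it is not taken from the paper itself.

```python
def learn_consistent_monomial(sample):
    """Elimination algorithm for monomials (conjunctions of literals).

    If some monomial has zero error on the sample, return one (as a set
    of literals (i, v) meaning x_i == v) in time linear in the sample
    size; otherwise return None. Each example is (x, y) with x a tuple
    of booleans and y the Boolean label.
    """
    n = len(sample[0][0])
    # Most specific hypothesis: every literal x_i and its negation.
    literals = {(i, v) for i in range(n) for v in (True, False)}
    for x, y in sample:
        if y:
            # Drop each literal this positive example falsifies,
            # i.e. every (i, v) with x[i] != v.
            literals -= {(i, not x[i]) for i in range(n)}

    # By construction the surviving conjunction accepts all positives;
    # a zero-error monomial exists iff it also rejects every negative.
    def accepts(x):
        return all(x[i] == v for i, v in literals)

    if any(accepts(x) for x, y in sample if not y):
        return None
    return literals
```

For example, on a sample labeled by the target monomial x0 AND NOT x1, the algorithm recovers exactly the literals `{(0, True), (1, False)}`; on a sample with contradictory labels it returns `None`, which is precisely the regime (nonzero best-rule error) where the paper's hardness results apply.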