Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. However, under the promise that a halfspace consistent with a fraction (1-\varepsilon) of the examples exists (for some small constant \varepsilon > 0), it was not known how to efficiently find a halfspace that is correct on even 51% of the examples. Nor was a hardness result known that ruled out getting agreement on more than 99.9% of the examples. In this work, we close this gap in our understanding, and prove that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense. Specifically, for arbitrary \varepsilon, \delta > 0, we prove that given a set of example-label pairs from the hypercube, a fraction (1-\varepsilon) of which can be explained by a halfspace, it is NP-hard to find a halfspace that correctly labels a fraction (1/2 + \delta) of the examples. The hardness result is tight, since it is trivial to get agreement on half of the examples. In learning theory parlance, we prove that weak proper agnostic learning of halfspaces is hard. This settles a question that was raised by Blum et al. in their work on learning halfspaces in the presence of random classification noise [7], and in some more recent works as well. Along the way, we also obtain a strong hardness result for another basic computational problem: solving a linear system over the rationals.
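The noise-free case mentioned in the abstract reduces to a linear programming feasibility question: a halfspace sign(w \cdot x - b) consistent with every label exists iff the constraints y_i (w \cdot x_i - b) \ge 1 are feasible (any strictly separating w can be rescaled to meet the margin 1). Below is a minimal sketch of this folklore reduction, assuming SciPy's linprog as the solver; the margin normalization and the function consistent_halfspace are illustrative choices, not taken from the paper.

```python
# Sketch: find a halfspace consistent with ALL labels via an LP feasibility
# check (the polynomial-time noise-free case; assumes scipy is available).
import numpy as np
from scipy.optimize import linprog

def consistent_halfspace(X, y):
    """Return (w, b) with sign(X @ w - b) == y if a consistent halfspace
    exists, else None.  X: (m, n) examples, y: (m,) labels in {-1, +1}."""
    m, n = X.shape
    # Feasibility LP over z = (w, b): for each i, y_i (w . x_i - b) >= 1,
    # rewritten in linprog's A_ub @ z <= b_ub form:
    #   -y_i * (x_i . w) + y_i * b <= -1
    A_ub = np.hstack([-y[:, None] * X, y[:, None]])
    b_ub = -np.ones(m)
    c = np.zeros(n + 1)  # zero objective: any feasible point will do
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    if not res.success:
        return None      # no halfspace fits all the examples
    return res.x[:n], res.x[n]

# Tiny usage example: hypercube points labeled by a known halfspace.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(40, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
y = np.sign(X @ w_true + 0.1)  # noise-free labels (never exactly zero here)
w, b = consistent_halfspace(X, y)
assert np.all(np.sign(X @ w - b) == y)
```

The paper's main result is that this tractability collapses under noise: once only a (1-\varepsilon) fraction of the labels is promised to be consistent, no polynomial-time algorithm can guarantee agreement (1/2 + \delta) unless P = NP.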