We analyze the performance of the widely studied Perceptron and Winnow algorithms for learning linear threshold functions under Valiant's probably approximately correct (PAC) model of concept learning. We show that under the uniform distribution on Boolean examples, the Perceptron algorithm can efficiently PAC learn nested functions (a class of linear threshold functions known to be hard for Perceptron under arbitrary distributions) but cannot efficiently PAC learn arbitrary linear threshold functions. We also prove that Littlestone's Winnow algorithm is not an efficient PAC learning algorithm for the class of positive linear threshold functions, thus answering an open question posed by Schmitt [Neural Comput., 10 (1998), pp. 235--250]. Based on our results, we conjecture that no "local" algorithm can learn linear threshold functions efficiently.
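For readers unfamiliar with the two mistake-driven learners the abstract contrasts, the following is a minimal sketch of their standard update rules on Boolean examples: Perceptron makes additive corrections, while Winnow makes multiplicative ones. This is an illustrative implementation of the textbook rules, not the specific constructions analyzed in the paper; the epoch counts and threshold choice (n, as in Littlestone's original formulation for monotone targets) are assumptions.

```python
def perceptron(examples, epochs=20):
    """Classic Perceptron: on each mistake, add y*x to the weights.
    Examples are (x, y) pairs with x in {0,1}^n and y in {-1, +1}."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + y * xi for wi, xi in zip(w, x)]  # additive update
                b += y
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def winnow(examples, epochs=20):
    """Littlestone's Winnow: multiplicative updates against threshold n.
    Promote (double) active weights on a false negative, demote (halve)
    them on a false positive."""
    n = len(examples[0][0])
    w = [1.0] * n
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else -1
            if pred != y:
                factor = 2.0 if y == 1 else 0.5
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else -1
```

Both learners are "local" in the sense gestured at by the conjecture: each update depends only on the current example and the current weight vector. On a linearly separable toy target such as the two-variable disjunction, both converge to a consistent hypothesis within a few passes.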