The authors consider the problem of learning a linear threshold function (a halfspace in n dimensions, also called a "perceptron"). Methods for solving this problem generally fall into two categories. In the absence of noise, the problem can be formulated as a linear program and solved in polynomial time with the ellipsoid algorithm or interior-point methods. On the other hand, simple greedy algorithms such as the perceptron algorithm seem to work well in practice and can be made noise tolerant, but their running time depends on a separation parameter (which quantifies the amount of "wiggle room" available) and can be exponential in the description length of the input.

The authors show how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. This yields a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise. The algorithm is based on a new method for removing outliers from data. Specifically, for any set S of points in $\mathbb{R}^n$, each given to b bits of precision, they show that one can remove just a small fraction of S so that in the remaining set T, for every vector v,

$$\max_{x \in T} (v \cdot x)^2 \;\le\; \mathrm{poly}(n, b)\, |T|^{-1} \sum_{x \in T} (v \cdot x)^2 .$$

After removing these outliers, they are able to show that a modified version of the perceptron learning algorithm runs in polynomial time, even in the presence of random classification noise.
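The outlier-removal condition can be restated in terms of leverage scores: with $M = \sum_{x \in T} x x^\top$, the supremum of $(v \cdot x)^2 / (v^\top M v)$ over directions v equals $x^\top M^{-1} x$, so the condition holds for every v exactly when $|T| \cdot \max_{x \in T} x^\top M^{-1} x$ stays below the poly(n, b) bound. A minimal Python sketch along those lines follows; the threshold beta stands in for the paper's poly(n, b) bound, and the greedy one-point-at-a-time removal is an illustrative assumption, not the authors' exact procedure.

```python
import numpy as np

def remove_outliers(S, beta):
    """Remove points until, for every direction v,
        max_{x in T} (v.x)^2 <= beta / |T| * sum_{x in T} (v.x)^2.

    Uses the identity sup_v (v.x)^2 / (v^T M v) = x^T M^{-1} x with
    M = sum_{x in T} x x^T, so the worst ratio over all directions
    equals |T| * max_x x^T M^{-1} x (the largest leverage score).
    Illustrative sketch only; beta is a hypothetical bound.
    """
    T = list(S)
    while len(T) > 1:
        X = np.array(T)                   # |T| x n matrix of remaining points
        M = X.T @ X                       # second-moment matrix
        Minv = np.linalg.pinv(M)          # pseudo-inverse handles rank deficiency
        lev = np.einsum('ij,jk,ik->i', X, Minv, X)  # x^T M^{-1} x per point
        worst = np.argmax(lev)
        if len(T) * lev[worst] <= beta:   # condition now holds for every v
            break
        T.pop(worst)                      # drop the most extreme point
    return T
```

Directions v orthogonal to the span of T contribute zero on both sides of the inequality, which is why the pseudo-inverse suffices when M is singular.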
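The final step pairs the cleaned data with a perceptron-style learner that tolerates random label noise. The sketch below illustrates that setting but is not the authors' modified update rule: it generates halfspace data with labels flipped at rate eta, then runs a standard perceptron whose iterates are averaged, a common way to damp the effect of noise-driven updates. All names here (noisy_halfspace_data, averaged_perceptron, eta) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_halfspace_data(w_true, m, eta):
    """Label m Gaussian points by sign(w_true . x), then flip each label
    independently with probability eta (random classification noise)."""
    X = rng.standard_normal((m, len(w_true)))
    y = np.sign(X @ w_true)
    y[rng.random(m) < eta] *= -1
    return X, y

def averaged_perceptron(X, y, epochs=50):
    """Run the standard mistake-driven perceptron over the noisy sample and
    return the average of all iterates; averaging stabilizes the hypothesis
    against noise-induced updates. (Illustrative; not the paper's update.)"""
    m, n = X.shape
    w = np.zeros(n)
    w_sum = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):
            if y[i] * (w @ X[i]) <= 0:   # mistake (or boundary): update
                w += y[i] * X[i]
            w_sum += w
    return w_sum / (epochs * m)
```

For example, with w_true = rng.standard_normal(20), m = 2000, and eta = 0.2, the averaged iterate typically agrees with the clean labels on well over half of fresh examples, i.e., it behaves as a weak hypothesis in the paper's sense; the paper's actual algorithm couples its modified perceptron update with the outlier-removal preprocessing to obtain the polynomial-time PAC guarantee.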