Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Mistake bounds and logarithmic linear-threshold learning algorithms
The Strength of Weak Learnability
Machine Learning
The perceptron algorithm is fast for nonmalicious distributions
Neural Computation
From on-line to batch learning
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Redundant noisy attributes, attribute errors, and linear-threshold learning using winnow
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
An improved boosting algorithm and its implications on learning complexity
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Cryptographic limitations on learning Boolean formulae and finite automata
Journal of the ACM (JACM)
Halfspace learning, linear programming, and nonmalicious distributions
Information Processing Letters
How fast can a threshold gate learn?
Proceedings of a workshop on Computational learning theory and natural learning systems (vol. 1): constraints and prospects
Boosting a weak learning algorithm by majority
Information and Computation
On the boosting ability of top-down decision tree learning algorithms
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
Game theory, on-line prediction and boosting
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th Annual ACM Symposium on the Theory of Computing (STOC '94), May 23–25, 1994, and Second Annual European Conference on Computational Learning Theory (EuroCOLT '95), March 13–15, 1995
General convergence results for linear discriminant updates
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Artificial Intelligence - Special issue on relevance
Improved boosting algorithms using confidence-rated predictions
COLT' 98 Proceedings of the eleventh annual conference on Computational learning theory
Identification criteria and lower bounds for perceptron-like learning rules
Neural Computation
Worst-case analysis of the perceptron and exponentiated update algorithms
Artificial Intelligence
Generalization performance of support vector machines and other pattern classifiers
Advances in kernel methods
The robustness of the p-norm algorithms
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Large Margin Classification Using the Perceptron Algorithm
Machine Learning - The Eleventh Annual Conference on Computational Learning Theory
Machine Learning
Machine Learning
FOCS '95 Proceedings of the 36th Annual Symposium on Foundations of Computer Science
Learning noisy perceptrons by a perceptron in polynomial time
FOCS '97 Proceedings of the 38th Annual Symposium on Foundations of Computer Science
A polynomial-time algorithm for learning noisy linear threshold functions
FOCS '96 Proceedings of the 37th Annual Symposium on Foundations of Computer Science
Combinations of weak classifiers
IEEE Transactions on Neural Networks
Learning Halfspaces with Malicious Noise
The Journal of Machine Learning Research
We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit sample complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online p-norm algorithms recently studied by Grove, Littlestone, and Schuurmans (1997, Proceedings of the Tenth Annual Conference on Computational Learning Theory, pp. 171–183) and by Gentile and Littlestone (1999, Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pp. 1–11). As special cases, taking p = 2 and p = ∞ yields natural boosting-based PAC analogues of Perceptron and Winnow, respectively. The p = ∞ case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning "sparse perceptrons" (Jackson & Craven, 1996, Advances in Neural Information Processing Systems 8, MIT Press). The analysis of the generalization error of the new algorithms relies on techniques from the theory of large margin classification.
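Since the abstract builds on the online p-norm algorithms of Grove, Littlestone, and Schuurmans and of Gentile and Littlestone, a minimal sketch of that online family may help fix ideas. It assumes the usual formulation in which a dual vector θ is updated additively on mistakes and the prediction weights are recovered through the link function f(θ) = ∇(½‖θ‖ₚ²); the function names below are illustrative, and this is the online precursor, not the boosting-based PAC algorithm the paper itself constructs.

```python
import numpy as np

def p_norm_link(theta, p):
    """Map dual weights theta to prediction weights w = grad(0.5 * ||theta||_p^2),
    i.e. w_i = sign(theta_i) * |theta_i|**(p-1) / ||theta||_p**(p-2)."""
    norm = np.linalg.norm(theta, ord=p)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def p_norm_perceptron(examples, p=2.0, eta=1.0):
    """Mistake-driven online p-norm algorithm (a sketch, not the paper's PAC variant).

    examples: iterable of (x, y) pairs with x a numpy vector and y in {-1, +1}.
    Returns the final prediction weights and the number of mistakes made.
    """
    theta = None
    mistakes = 0
    for x, y in examples:
        if theta is None:
            theta = np.zeros_like(x, dtype=float)
        w = p_norm_link(theta, p)
        if y * np.dot(w, x) <= 0:       # mistake (or zero margin)
            theta += eta * y * x        # additive update in the dual coordinates
            mistakes += 1
    # p = 2 recovers the classical Perceptron; p on the order of 2*ln(n)
    # gives Winnow-like multiplicative behavior (Gentile & Littlestone).
    return p_norm_link(theta, p), mistakes
```

With p = 2 the link function is the identity and the routine reduces to the classical Perceptron; taking p on the order of 2 ln n makes the recovered weights depend on θ roughly exponentially, which is the Winnow-like regime that the paper's p = ∞ analogue captures in the PAC setting.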