Learning of depth two neural networks with constant fan-in at the hidden nodes (extended abstract)
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
Agnostic learning of geometric patterns (extended abstract)
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
The complexity of learning according to two models of a drifting environment
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Machine Learning - Special issue on context sensitivity and concept drift
On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
PAC Analogues of Perceptron and Winnow Via Boosting the Margin
Machine Learning
On learning unions of pattern languages and tree patterns in the mistake bound model
Theoretical Computer Science
On Learning Unions of Pattern Languages and Tree Patterns
ALT '99 Proceedings of the 10th International Conference on Algorithmic Learning Theory
On-line learning with delayed label feedback
ALT '05 Proceedings of the 16th international conference on Algorithmic Learning Theory
N. Littlestone developed a simple deterministic on-line learning algorithm for learning k-literal disjunctions. This algorithm, called Winnow, keeps one weight for each of the n variables and performs multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm to the case when the disjunction may change over time. In this setting, a target disjunction schedule T is a sequence of disjunctions (one per trial), and the shift size is the total number of literals added to or removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of examples. This algorithm, which allows us to track the predictions of the best disjunction, is hardly more complex than the original version; however, the amortized analysis needed to obtain worst-case mistake bounds requires new techniques. In some cases our lower bounds show that the upper bounds of our algorithm have the right constant in front of the leading term of the mistake bound, and almost the right constant in front of the second leading term. By combining the tracking capability with existing applications of Winnow, we are able to extend these applications to the shifting case as well.
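To make the multiplicative-update idea concrete, the following is a minimal Python sketch of the basic deterministic Winnow for monotone disjunctions, extended with an optional weight floor as one crude way to keep demoted literals recoverable when the target disjunction shifts. It is not the randomized tracking algorithm analyzed in the abstract; the function name, the promotion factor alpha = 2, the threshold theta = n, and the floor w_min are illustrative assumptions.

```python
import numpy as np


def winnow_with_floor(examples, n, alpha=2.0, w_min=None):
    """Sketch of Winnow with multiplicative updates (not the paper's algorithm).

    `examples` is an iterable of (x, y) pairs, where x is a 0/1 vector of
    length n and y is the 0/1 label.  The optional floor `w_min` is an
    illustrative assumption: it keeps demoted weights bounded away from
    zero so a literal that re-enters the target disjunction can be
    promoted back quickly.
    """
    w = np.ones(n)          # one weight per variable, initialised to 1
    theta = float(n)        # standard Winnow threshold
    mistakes = 0
    for x, y in examples:
        x = np.asarray(x, dtype=float)
        y_hat = 1 if w @ x >= theta else 0
        if y_hat != y:
            mistakes += 1
            active = (x == 1)
            if y == 1:                      # false negative: promote active weights
                w[active] *= alpha
            else:                           # false positive: demote active weights
                w[active] /= alpha
                if w_min is not None:
                    np.maximum(w, w_min, out=w)   # keep weights recoverable
    return w, mistakes


if __name__ == "__main__":
    # Tiny synthetic stream whose target disjunction shifts halfway through.
    rng = np.random.default_rng(0)
    n, T = 20, 2000
    target = {0, 3}                          # current disjunction: x0 OR x3
    stream = []
    for t in range(T):
        if t == T // 2:
            target = {3, 7}                  # one literal removed, one added
        x = rng.integers(0, 2, size=n)
        y = int(any(x[i] == 1 for i in target))
        stream.append((x, y))
    _, m = winnow_with_floor(stream, n, w_min=0.5)
    print("mistakes:", m)
```

The floor plays the role that the paper's more careful (randomized, amortized-analyzed) mechanism serves: without it, a weight driven very low after a literal leaves the disjunction would need many promotions to recover once that literal returns, inflating the mistake count under a shifting schedule.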