Mistake bounds and logarithmic linear-threshold learning algorithms
In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the on-line model of learning. We use a variation of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions retain many of the advantages that the traditional Winnow algorithm has on fixed concepts: a weak dependence on the number of irrelevant attributes, an inexpensive runtime, and robust behavior in the presence of noise. In fact, we show that the bound for tracking Winnow has an even weaker dependence on the number of irrelevant attributes. Let X ∈ [0,1]^n be an instance of the learning problem. Whereas previous bounds depend on ln n, the shifting-concept bound in this paper depends on max ln(||X||_1), where the maximum is taken over the instances X. We show that this behavior results from certain parameter choices in the tracking version of Winnow, and we show how to use related parameters to obtain a similar mistake bound for the traditional fixed-concept version of Winnow.
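For context, the following is a minimal sketch of the standard, fixed-concept Winnow update that the tracking variant builds on; it is not the shifting-concept algorithm analyzed in the paper. Instances x are assumed to lie in [0,1]^n with labels in {0,1}, and the promotion base alpha, the threshold theta, and the toy target in the usage example are illustrative choices, not the parameter settings behind the paper's bounds.

    import numpy as np

    class Winnow:
        """Standard (fixed-concept) Winnow with multiplicative updates.

        Sketch only: parameter choices here are illustrative, and the
        tracking variant from the paper is not implemented.
        """

        def __init__(self, n, alpha=2.0, theta=None):
            self.w = np.ones(n)                       # weights start at 1
            self.alpha = alpha                        # promotion/demotion base
            self.theta = float(n) if theta is None else theta  # prediction threshold

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            return int(np.dot(self.w, x) >= self.theta)

        def update(self, x, y):
            """Predict on x; on a mistake, rescale weights multiplicatively."""
            x = np.asarray(x, dtype=float)
            y_hat = self.predict(x)
            if y_hat != y:
                if y == 1:
                    self.w *= self.alpha ** x         # false negative: promote
                else:
                    self.w *= self.alpha ** (-x)      # false positive: demote
            return y_hat

    # Toy usage (hypothetical setup): sparse instances over 1000 attributes,
    # fixed 2-literal disjunction as the target concept.
    rng = np.random.default_rng(0)
    learner = Winnow(n=1000)
    mistakes = 0
    for _ in range(2000):
        x = (rng.random(1000) < 0.05).astype(float)
        y = int(x[0] > 0 or x[1] > 0)
        mistakes += int(learner.update(x, y) != y)
    print("mistakes:", mistakes)

On each trial the learner predicts with a linear threshold over the current weights and, only when it errs, rescales each weight by alpha^{x_i} (promotion) or alpha^{-x_i} (demotion); this multiplicative update is what gives Winnow its weak, logarithmic dependence on the number of irrelevant attributes that the abstract refers to.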