Mistake bounds and logarithmic linear-threshold learning algorithms
In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the on-line model of learning. We use a standard variant of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions retain many of the advantages that the traditional Winnow algorithm has on fixed concepts. These benefits include a weak dependence on the number of irrelevant attributes, inexpensive runtime, and robust behavior against noise. In fact, we show that the bound for the tracking version of Winnow has an even better dependence on the irrelevant attributes. Let X ∈ [0, 1]^n be an instance of the learning problem. Whereas the traditional algorithm's bound depends on ln n, the shifting-concept bound in this paper depends approximately on max ln ||X||_1, the maximum taken over the instances.
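To make the setting concrete, the following is a minimal sketch of a Winnow-style learner with multiplicative updates. It is not the paper's tuned algorithm: the promotion factor `alpha`, threshold `theta`, and weight floor `floor` are illustrative choices. The weight floor is the standard modification used when tracking shifting concepts, since weights that never decay to zero let the learner recover attributes that become relevant again.

```python
def winnow_track(examples, n, alpha=2.0, theta=None, floor=None):
    """Winnow-style on-line learner with a weight floor for tracking.

    examples: iterable of (x, y) with x in [0,1]^n and y in {0, 1}.
    Returns the final weight vector and the number of mistakes made.
    Parameter defaults are illustrative, not the paper's constants.
    """
    theta = theta if theta is not None else n / 2.0
    floor = floor if floor is not None else 1.0 / (4 * n)
    w = [1.0] * n                      # all weights start equal
    mistakes = 0
    for x, y in examples:
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if yhat != y:
            mistakes += 1
            if y == 1:
                # Promotion: multiplicatively boost weights of active attributes.
                w = [wi * (alpha ** xi) for wi, xi in zip(w, x)]
            else:
                # Demotion: shrink weights, but never below the floor,
                # so a previously demoted attribute can be recovered quickly
                # after a concept shift.
                w = [max(wi * (alpha ** -xi), floor) for wi, xi in zip(w, x)]
    return w, mistakes
```

Because updates are multiplicative and only mistakes trigger them, irrelevant attributes contribute to the mistake bound only logarithmically, which is the property the abstract refers to.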