Sequential learning and decision algorithms are investigated under a family of additive loss functions for individual data sequences, with a variety of application areas. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as $n^{-1}\log n$, where $n$ is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSMs) is studied. It is shown that there exist Markovian machines with sufficiently long memory that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed, with special attention to the recursive least squares (RLS) algorithm.
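To make the finite-alphabet setting concrete, the sketch below is an illustration of the general idea rather than the construction analyzed in the paper: a k-th order Markovian machine over a binary alphabet that assigns sequential probabilities from smoothed context counts and is scored under the self-information (log) loss. The class name `MarkovPredictor`, the add-one smoothing rule, and the choice of order are assumptions made for the example.

```python
import numpy as np

class MarkovPredictor:
    """A k-th order Markov machine: sequential probability assignment
    from per-context symbol counts with add-one (Laplace) smoothing."""

    def __init__(self, order, alphabet_size=2):
        self.k = order
        self.m = alphabet_size
        self.counts = {}  # context (tuple of last k symbols) -> symbol counts

    def predict(self, context):
        # P(next symbol | last k symbols), smoothed so it is never zero.
        c = self.counts.get(tuple(context[-self.k:]), np.zeros(self.m))
        return (c + 1.0) / (c.sum() + self.m)

    def update(self, context, symbol):
        key = tuple(context[-self.k:])
        if key not in self.counts:
            self.counts[key] = np.zeros(self.m)
        self.counts[key][symbol] += 1.0

rng = np.random.default_rng(0)
x = (rng.random(10_000) < 0.7).astype(int)  # an example binary sequence
predictor = MarkovPredictor(order=3)
loss = 0.0
for t in range(3, len(x)):
    p = predictor.predict(x[:t])
    loss += -np.log2(p[x[t]])               # accumulated log loss
    predictor.update(x[:t], x[t])
print(f"per-symbol log loss: {loss / (len(x) - 3):.4f} bits")
```

On this Bernoulli(0.7) example the per-symbol log loss approaches the source entropy (about 0.88 bits), with the excess shrinking qualitatively like the $n^{-1}\log n$ redundancy rate quoted above; for an individual sequence the relevant comparison is against the best FSM in hindsight. For the continuous-valued case, the following is a minimal sketch of the standard RLS recursion in an assumed generic form (linear prediction of $x_t$ from the previous $d$ observations, with a rank-one update of the inverse covariance); it is not taken from the paper's development.

```python
import numpy as np

def rls_predict(x, d=4, lam=1.0):
    """Sequential linear prediction of x[t] from the d previous samples,
    with recursive least squares updates (no forgetting factor)."""
    w = np.zeros(d)          # weight vector
    P = np.eye(d) / lam      # inverse of the regularized covariance
    errors = []
    for t in range(d, len(x)):
        u = x[t - d:t][::-1]              # regressor: last d observations
        e = x[t] - w @ u                  # prediction error
        g = P @ u / (1.0 + u @ P @ u)     # gain vector
        w = w + g * e                     # weight update
        P = P - np.outer(g, u @ P)        # rank-one inverse update
        errors.append(e * e)
    return w, np.mean(errors)

w, mse = rls_predict(np.sin(0.3 * np.arange(2000)))  # e.g., a smooth signal
print(f"mean squared prediction error: {mse:.3e}")
```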