COLT '90 Proceedings of the third annual workshop on Computational learning theory
Predicting a binary sequence almost as well as the optimal biased coin. COLT '96 Proceedings of the ninth annual conference on Computational learning theory.
Minimax relative loss analysis for sequential prediction algorithms using parametric hypotheses. COLT '98 Proceedings of the eleventh annual conference on Computational learning theory.
Competitive on-line linear regression. NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10.
Minimax regret under log loss for general classes of experts. COLT '99 Proceedings of the twelfth annual conference on Computational learning theory.
Stochastic Complexity in Statistical Inquiry. World Scientific.
Tight worst-case loss bounds for predicting with expert advice. EuroCOLT '95 Proceedings of the Second European Conference on Computational Learning Theory.
Exponentiated gradient versus gradient descent for linear predictors. Information and Computation.
Fisher information and stochastic complexity. IEEE Transactions on Information Theory.
A decision-theoretic extension of stochastic complexity and its applications to learning. IEEE Transactions on Information Theory.
The Last-Step Minimax Algorithm. ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory.
On-Line Estimation of Hidden Markov Model Parameters. DS '00 Proceedings of the Third International Conference on Discovery Science.
We are concerned with the problem of sequential prediction using a given hypothesis class containing continuously many prediction strategies. An effective performance measure is the minimax relative cumulative loss (RCL): the minimum over prediction algorithms of the worst-case difference between the algorithm's cumulative loss and that of the best strategy in the hypothesis class. The purpose of this paper is to evaluate the minimax RCL for general continuous hypothesis classes under general loss functions. We first derive asymptotic upper and lower bounds on the minimax RCL, showing that both match (k/2c) ln m to within o(ln m), where k is the dimension of the parameter space of the hypothesis class, m is the sample size, and c is a constant depending on the loss function. We thereby show that the cumulative loss attaining the minimax RCL asymptotically coincides with the extended stochastic complexity (ESC), an extension of Rissanen's stochastic complexity (SC) to the decision-theoretic scenario. We further derive non-asymptotic upper bounds on the minimax RCL for both parametric and nonparametric hypothesis classes. Finally, we apply the analysis to the regression problem to derive the smallest worst-case cumulative loss bounds known to date.
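A minimal formal sketch of the quantity being bounded may help; the notation here (A for a prediction algorithm, y^m = y_1 ... y_m for an outcome sequence, H for the hypothesis class, L for the loss function) is introduced for illustration and is not taken verbatim from the paper:

\[
  R_m(H) \;=\; \min_{A}\,\max_{y^m} \Biggl( \sum_{t=1}^{m} L\bigl(y_t,\, A(y^{t-1})\bigr) \;-\; \min_{h \in H} \sum_{t=1}^{m} L\bigl(y_t,\, h(y^{t-1})\bigr) \Biggr).
\]

The asymptotic result stated above then reads

\[
  R_m(H) \;=\; \frac{k}{2c}\,\ln m \;+\; o(\ln m),
\]

where k is the parameter dimension of H and c depends on the loss; for the logarithmic loss this is consistent with the familiar (k/2) ln m term arising in Rissanen's stochastic complexity.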