Suppose nature picks a probability measure Pθ on a complete separable metric space X at random from a measurable set PΘ = {Pθ : θ ∈ Θ}. Then, without knowing θ, a statistician picks a measure Q on X. Finally, the statistician suffers a loss D(Pθ||Q), the relative entropy between Pθ and Q. We show that the minimax and maximin values of this game are always equal, and that there is always a minimax strategy in the closure of the set of all Bayes strategies. This generalizes previous results of Gallager (1979), and Davisson and Leon-Garcia (1980).
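In the finite special case treated by Gallager and by Davisson and Leon-Garcia (finitely many measures Pθ on a finite alphabet), the common minimax/maximin value is a channel capacity, the Bayes strategy for a prior w is the mixture Qw = Σθ w(θ) Pθ, and a least-favorable prior can be computed by the standard Blahut-Arimoto iteration. The sketch below (an illustration, not the paper's construction; the function names and the two-coin family are invented for the example) computes the maximin value and checks numerically that the per-θ losses D(Pθ||Q) under the resulting Bayes mixture equalize at that value, i.e. that the mixture is also minimax:

```python
import numpy as np

def kl(p, q):
    """Relative entropy D(p || q) in nats, for distributions on a finite set."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def minimax_game(P, iters=500):
    """Blahut-Arimoto iteration for the finite relative-entropy game.

    P has one row per theta, each row a distribution on a finite alphabet.
    Returns the least-favorable prior w, the Bayes mixture Q under w,
    the maximin value w . d, and the loss vector d(theta) = D(P_theta || Q).
    """
    k = P.shape[0]
    w = np.full(k, 1.0 / k)              # start from the uniform prior on Theta
    for _ in range(iters):
        q = w @ P                        # Bayes strategy: the mixture under w
        d = np.array([kl(p, q) for p in P])
        w = w * np.exp(d)                # tilt the prior toward high-loss thetas
        w /= w.sum()
    q = w @ P
    d = np.array([kl(p, q) for p in P])
    return w, q, float(w @ d), d

# Illustrative two-point family: biased coins with heads-probability 0.9 and 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
w, q, value, d = minimax_game(P)
# By symmetry w = (1/2, 1/2); the value is log 2 - H(0.1) ~ 0.368 nats,
# and max_theta D(P_theta || Q) equals the value (equalization).
```

The equalization check `max(d) ≈ value` is exactly the finite-dimensional shadow of the paper's statement: the maximin value (a Bayes risk under the least-favorable prior) coincides with the minimax value, attained by a Bayes mixture.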