In this paper, we establish upper and lower bounds for some statistical estimation problems through concise information-theoretic arguments. Our upper-bound analysis is based on a simple yet general inequality that we call the information exponential inequality. We show that this inequality naturally leads to a general randomized estimation method for which performance upper bounds can be obtained. The lower bounds, which apply to all statistical estimators, are obtained by original applications of some well-known information-theoretic inequalities, and they approximately match the upper bounds for several important problems. Moreover, our framework can be regarded as a natural generalization of the standard minimax framework, in that we allow the performance of the estimator to vary across different possible underlying distributions according to a predefined prior.
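As a minimal sketch of the randomized estimation method (the abstract does not spell out the construction; the Gibbs-posterior form below, together with the symbols $\pi$, $\ell$, $\rho$, and $Z_i$, is our assumption about the standard formulation in this line of work), the estimator draws a parameter from the prior reweighted exponentially by empirical loss:
\[
\hat{\pi}(d\theta) \;\propto\; \exp\!\Big(-\rho \sum_{i=1}^{n} \ell(\theta, Z_i)\Big)\, \pi(d\theta),
\]
where $\pi$ is a prior over the parameter space, $\ell$ is a loss function, $Z_1, \dots, Z_n$ is the observed sample, and $\rho > 0$ is a learning rate. Bounds of the information-exponential-inequality type then control the expected risk of a draw from $\hat{\pi}$ by an oracle term plus a complexity penalty of the form $\mathrm{KL}(q \,\|\, \pi)/(\rho n)$, taken over any comparison distribution $q$.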