This paper gives distribution-free concentration inequalities for the missing mass and for the error rate of histogram rules. Negative association methods reduce these concentration problems to concentration questions about sums of independent random variables. Although the resulting sums are independent, they are highly heterogeneous: the summands differ enormously in scale. Such heterogeneous independent sums cannot be analyzed using standard concentration inequalities such as Hoeffding's inequality, the Angluin-Valiant bound, Bernstein's inequality, Bennett's inequality, or McDiarmid's theorem. The concentration inequality for histogram rule error is motivated by the desire to construct a new class of bounds on the generalization error of decision trees.
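
As a point of reference, here is a minimal sketch of the missing-mass setup in standard notation; the symbols $n$, $p_x$, $S$, $M_0$, and $B_x$ are ours, not taken from the paper. Given an i.i.d. sample $S = (X_1, \ldots, X_n)$ drawn from a distribution $p = (p_x)$ over a countable alphabet, the missing mass is the total probability of the unseen symbols:

\[
M_0(S) \;=\; \sum_{x} p_x \,\mathbf{1}[x \notin S],
\qquad
\mathbb{E}\, M_0 \;=\; \sum_{x} p_x (1 - p_x)^n .
\]

The occupancy indicators $\mathbf{1}[x \notin S]$ are negatively associated, so Chernoff-style tail bounds for $M_0$ can be obtained from the fully independent surrogate

\[
\widetilde{M}_0 \;=\; \sum_{x} p_x B_x,
\qquad
B_x \sim \mathrm{Bernoulli}\big((1 - p_x)^n\big) \text{ independently}.
\]

The summands $p_x B_x$ live on scales as disparate as the probabilities $p_x$ themselves, which is the heterogeneity that the standard inequalities named above handle poorly.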