S. V. Pemmaraju. Equitable Coloring Extends Chernoff-Hoeffding Bounds. In Proceedings of APPROX 2001/RANDOM 2001, 2001.
M. Seeger. PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification. Journal of Machine Learning Research, 3, 2002.
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. Journal of Machine Learning Research, 3, 2002.
Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An Efficient Boosting Algorithm for Combining Preferences. Journal of Machine Learning Research, 4, 2003.
J. Langford. Tutorial on Practical Prediction Theory for Classification. Journal of Machine Learning Research, 6, 2005.
S. Janson. Large Deviations for Sums of Partly Dependent Random Variables. Random Structures & Algorithms, 24(3), 2004.
S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization Bounds for the Area Under the ROC Curve. Journal of Machine Learning Research, 6, 2005.
J.-Y. Audibert and O. Bousquet. Combining PAC-Bayesian and Generic Chaining Bounds. Journal of Machine Learning Research, 8, 2007.
P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. PAC-Bayesian Learning of Linear Classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), 2009.
S. Agarwal and P. Niyogi. Generalization Bounds for Ranking Algorithms via Algorithmic Stability. Journal of Machine Learning Research, 10, 2009.
PAC-Bayes bounds are among the most accurate generalization bounds for classifiers learned from independently and identically distributed (IID) data; this is particularly true for margin classifiers, and recent contributions have shown how practical these bounds can be, whether for performing model selection (Ambroladze et al., 2007) or for directly guiding the learning of linear classifiers (Germain et al., 2009). However, in many practical situations the training data exhibit dependencies, and the traditional IID assumption does not hold. Stating generalization bounds for such frameworks is therefore of the utmost interest, from both theoretical and practical standpoints. In this work, we propose the first, to the best of our knowledge, PAC-Bayes generalization bounds for classifiers trained on data exhibiting interdependencies. Our approach rests on decomposing a so-called dependency graph, which encodes the dependencies within the data, into sets of independent data points by means of graph fractional covers. The resulting bounds are very general: finding an upper bound on the fractional chromatic number of the dependency graph is sufficient to obtain new PAC-Bayes bounds for a specific setting. We show how our results can be used to derive bounds for ranking statistics (such as the AUC) and for classifiers trained on data distributed according to a stationary β-mixing process; along the way, we show how our approach seamlessly handles U-processes. As a side note, we also provide a PAC-Bayes generalization bound for classifiers learned on data from stationary φ-mixing distributions.
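To make the fractional-cover mechanism concrete, here is a minimal Python sketch for the pairwise (AUC-type) ranking setting mentioned in the abstract. It is an illustration under our own assumptions, not code from the paper: the helper names (round_robin_matchings, check_proper_cover) are hypothetical, and we assume an even number n of IID training points. Two pairs of points are dependent whenever they share a point, and the classic round-robin one-factorization of the complete graph splits the n(n-1)/2 interdependent pairs into n-1 matchings of mutually independent pairs, which certifies an upper bound of n-1 on the fractional chromatic number of the dependency graph.

```python
# Sketch: a proper cover of the dependency graph of pairwise (AUC-type)
# ranking statistics via the round-robin (circle method) one-factorization.
# Illustration only; function names are ours, not from the paper.

from itertools import combinations


def round_robin_matchings(n):
    """Partition all pairs over {0, ..., n-1} (n even) into n-1 perfect
    matchings using the circle method."""
    assert n % 2 == 0, "the circle method below assumes an even n"
    players = list(range(n))
    rounds = []
    for _ in range(n - 1):
        # Pair up opposite positions in the current circular arrangement.
        rounds.append([(players[i], players[n - 1 - i]) for i in range(n // 2)])
        # Rotate every position except the first.
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds


def check_proper_cover(n, rounds):
    """Check that the matchings form a proper exact cover of the dependency
    graph: every pair is covered exactly once, and no two pairs within a
    matching share a point (i.e., each matching is an independent set)."""
    seen = set()
    for matching in rounds:
        points = [p for pair in matching for p in pair]
        assert len(points) == len(set(points)), "dependent pairs in one class"
        seen.update(frozenset(pair) for pair in matching)
    assert seen == {frozenset(p) for p in combinations(range(n), 2)}
    return True


if __name__ == "__main__":
    n = 8                                    # number of IID training points
    rounds = round_robin_matchings(n)
    check_proper_cover(n, rounds)
    m = n * (n - 1) // 2                     # number of interdependent pairs
    chi_star_upper = len(rounds)             # = n - 1
    print(f"pairs m = {m}, chi* <= {chi_star_upper}, m/chi* = {m / chi_star_upper}")
```

Plugged into a chromatic bound of this kind, such a cover replaces the number of pairs m by an effective sample size of m/chi* = n/2, matching the intuition that n points carry only about n/2 mutually independent pairs.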