Bayesian approaches to learning and estimation have long played a significant role in the statistics literature. While they are often provably optimal in a frequentist setting and lead to excellent performance in practical applications, few precise characterizations of their finite-sample performance under general conditions have been available. In this paper we consider the class of Bayesian mixture algorithms, in which an estimator is formed by constructing a data-dependent mixture over some hypothesis space. Consistent with what is observed in practice, our results demonstrate that mixture approaches are particularly robust and allow for the construction of highly complex estimators while avoiding undesirable overfitting effects. Our results, while data-dependent in nature, are insensitive to the underlying model assumptions and apply whether or not these hold. At a technical level, the approach applies to unbounded functions, constrained only by certain moment conditions. Finally, the bounds derived can be applied directly to non-Bayesian mixture approaches such as boosting and bagging.
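To make the mixture construction concrete, the following is a minimal sketch of such an estimator over a finite hypothesis class. The Gibbs-style posterior weighting (prior reweighted by an exponentiated empirical loss, with an inverse-temperature parameter beta) is an illustrative assumption for this sketch, not the paper's exact construction; the function and parameter names are hypothetical.

```python
import numpy as np

def mixture_predict(hypotheses, prior, X_train, y_train, X_test, beta=1.0):
    """Form a data-dependent mixture over `hypotheses` and average predictions.

    hypotheses: list of callables h(X) -> label predictions in {-1, +1}
    prior:      array of prior weights, one per hypothesis
    beta:       inverse temperature controlling how sharply the mixture
                concentrates on low-error hypotheses (assumed parameter)
    """
    # Empirical 0-1 loss of each hypothesis on the training sample.
    losses = np.array([np.mean(h(X_train) != y_train) for h in hypotheses])

    # Data-dependent posterior: reweight the prior by exponentiated loss
    # (a Gibbs posterior; assumed form for illustration).
    weights = np.asarray(prior, dtype=float) * np.exp(-beta * len(y_train) * losses)
    weights /= weights.sum()

    # Mixture prediction: posterior-weighted average of the hypotheses,
    # thresholded to a label. Averaging, rather than selecting a single
    # hypothesis, is the step that tempers overfitting by any one member.
    avg = sum(w * h(X_test) for w, h in zip(weights, hypotheses))
    return np.sign(avg)
```

The same template covers the non-Bayesian mixtures mentioned above: bagging corresponds to uniform weights over hypotheses trained on bootstrap resamples, while boosting produces a particular data-dependent weighting over base hypotheses.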