We suggest a penalty function to be used in various problems of structural risk minimization. This penalty is data-dependent and is based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets). The standard complexity penalties, used in learning problems and based on the VC-dimensions of the classes, are conservative upper bounds (in a probabilistic sense, uniformly over the set of all underlying distributions) for the penalty we suggest. Thus, for a particular distribution of training examples, one can expect better performance of learning algorithms with the data-driven Rademacher penalties. We obtain oracle inequalities for the theoretical risk of estimators obtained by structural minimization of the empirical risk with Rademacher penalties. The inequalities imply some form of optimality of the empirical risk minimizers. We also suggest an iterative approach to structural risk minimization with Rademacher penalties, in which the hierarchy of classes is not given in advance but is determined in a data-driven iterative process of risk minimization. We prove probabilistic oracle inequalities for the theoretical risk of the estimators based on this approach as well.
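The penalty described here is the expected sup-norm of the Rademacher process, E_sigma sup_f |(1/n) sum_i sigma_i f(X_i)| with i.i.d. random signs sigma_i, and its key practical property is that it can be estimated from the training sample alone. The following is a minimal sketch of that idea, not the paper's construction: it Monte Carlo-estimates the penalty for finite classes of one-dimensional threshold classifiers and runs penalized empirical risk minimization over a nested hierarchy of such classes. All function names, the toy data, and the grid hierarchy are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm): Monte Carlo
# estimation of the sup-norm Rademacher penalty, used for structural risk
# minimization over a nested hierarchy of finite threshold classes.
import numpy as np

rng = np.random.default_rng(0)

def rademacher_penalty(losses, n_draws=200, rng=rng):
    """Estimate E_sigma max_j |(1/n) sum_i sigma_i * losses[j, i]|.

    losses: (m, n) array; row j holds the 0/1 losses of hypothesis j on the
    n training examples. The sup over the (finite) class is a max over rows;
    the expectation over random signs is approximated by n_draws draws.
    """
    m, n = losses.shape
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher signs
    corr = sigma @ losses.T / n                         # (n_draws, m)
    return np.abs(corr).max(axis=1).mean()

# Toy data: noisy labels from a 1-D threshold rule (assumed setup).
n = 200
X = rng.uniform(0, 1, size=n)
y = (X > 0.37).astype(float)
y = np.where(rng.random(n) < 0.1, 1 - y, y)   # 10% label noise

def hierarchy(k):
    """Class k: threshold classifiers on a grid of 2**k + 1 points.
    The grids are nested, so the classes grow with k."""
    return np.linspace(0, 1, 2 ** k + 1)

best = None
for k in range(1, 8):
    thresholds = hierarchy(k)
    # 0/1 losses of every classifier in class k on the sample: (m, n)
    preds = (X[None, :] > thresholds[:, None]).astype(float)
    losses = (preds != y[None, :]).astype(float)
    emp_risk = losses.mean(axis=1)
    penalty = rademacher_penalty(losses)
    j = emp_risk.argmin()
    score = emp_risk[j] + penalty      # penalized empirical risk
    if best is None or score < best[0]:
        best = (score, k, thresholds[j])

score, k, theta = best
print(f"selected class k={k}, threshold={theta:.3f}, penalized risk={score:.3f}")
```

Because the penalty is computed from the same sample on which the empirical risk is minimized, it adapts to the actual distribution of the examples, which is the sense in which the abstract says it can outperform the distribution-free VC-type penalties.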