Let $(X,Y)$ be a random couple, $X$ being an observable instance and $Y\in\{-1,1\}$ being a binary label to be predicted based on an observation of the instance. Let $(X_i, Y_i)$, $i=1,\dots,n$ be training data consisting of $n$ independent copies of $(X,Y)$. Consider a real-valued classifier $\hat{f}_n$ that minimizes the following penalized empirical risk $$\frac{1}{n}\sum\limits_{i=1}^n \ell(Y_{i}f(X_{i})) + \lambda\|f\|^{2} \rightarrow \min,\quad f\in {\mathcal H}$$ over a Hilbert space ${\mathcal H}$ of functions with norm $\|\cdot\|$, $\ell$ being a convex loss function and $\lambda \geq 0$ being a regularization parameter. In particular, ${\mathcal H}$ might be a Sobolev space or a reproducing kernel Hilbert space. We provide some conditions under which the generalization error of the corresponding binary classifier $\mathrm{sign}(\hat{f}_n)$ converges to the Bayes risk exponentially fast.
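For concreteness, here is a minimal sketch (not from the paper) of the estimator described above in the reproducing-kernel-Hilbert-space case. By the representer theorem the minimizer can be written as $f(x) = \sum_j \alpha_j K(x, x_j)$ with $\|f\|^2 = \alpha^\top K \alpha$, so the penalized empirical risk becomes a finite-dimensional convex problem in $\alpha$. The sketch assumes a Gaussian kernel and the logistic loss $\ell(u) = \log(1 + e^{-u})$ as the convex surrogate; all names and parameters (`gaussian_kernel`, `fit_penalized_erm`, `lam`, `gamma`, `lr`, `n_iter`) are illustrative choices, not notation from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_penalized_erm(X, y, lam=0.1, gamma=1.0, lr=0.01, n_iter=2000):
    """Gradient descent on the kernelized objective
        (1/n) * sum_i ell(y_i * (K @ alpha)_i) + lam * alpha @ K @ alpha,
    where f = K @ alpha by the representer theorem and
    ell(u) = log(1 + exp(-u)) is the logistic (convex) loss."""
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # margins m_i = Y_i * f(X_i), clipped for numerical stability
        margins = np.clip(y * (K @ alpha), -30.0, 30.0)
        # ell'(m_i) * y_i for the logistic loss
        dloss = -y / (1.0 + np.exp(margins))
        # gradient of the empirical risk plus the penalty 2 * lam * K @ alpha
        grad = K @ dloss / n + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha, K

# Usage: train on toy data, then classify with the plug-in rule sign(f_hat).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
alpha, K = fit_penalized_erm(X, y)
print("training accuracy:", np.mean(np.sign(K @ alpha) == y))
```

This only illustrates how the estimator $\hat{f}_n$ is computed; the paper's contribution, the exponential convergence of the risk of $\mathrm{sign}(\hat{f}_n)$ to the Bayes risk, rests on distributional conditions that the sketch does not touch.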