We consider a family of classification algorithms generated from Tikhonov regularization schemes, involving multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers when the loss function can attain the value zero. The error analysis consists of two parts: regularization error and sample error. Allowing multiple kernels in the algorithm improves the regularization and approximation errors, which is one advantage of the multi-kernel setting. For a general loss function, we show how to bound the regularization error by approximation in weighted L^q spaces. For the sample error, we use a projection operator. The projection, combined with the decay of the regularization error, enables us to improve on convergence rates in the literature, even for one-kernel schemes and special loss functions: the least squares loss and the hinge loss for support vector machine soft margin classifiers. Existence of a minimizer for the regularization scheme associated with multi-kernels is verified when the kernel functions are continuous with respect to the index set. Concrete examples, including Gaussian kernels with flexible variances and probability distributions satisfying certain noise conditions, illustrate the general theory.
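For concreteness, here is a minimal sketch of the form such a multi-kernel Tikhonov regularization scheme typically takes in the learning-theory literature; the notation is assumed for illustration, since the abstract fixes no symbols: V is the convex loss, \mathcal{K} the set of candidate kernels, H_K the reproducing kernel Hilbert space of a kernel K, and \lambda > 0 the regularization parameter. Given a sample z = \{(x_i, y_i)\}_{i=1}^m with labels y_i \in \{-1, 1\}, the induced classifier is \mathrm{sign}(f_z), where

\[
  f_z = \operatorname*{arg\,min}_{K \in \mathcal{K}} \; \min_{f \in H_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} V\bigl(y_i f(x_i)\bigr) + \lambda \, \|f\|_{K}^{2} \right\}.
\]

The hinge loss V(t) = \max\{0, 1 - t\} of SVM soft margin classifiers attains the value zero (for t \ge 1), so it is one instance of the losses covered; taking \mathcal{K} to be a family of Gaussian kernels indexed by their variances recovers the flexible-variance example mentioned above.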