The covering number for some Mercer kernel Hilbert spaces
Journal of Complexity
The capacity of reproducing kernel Hilbert spaces (RKHS) plays an essential role in the analysis of learning theory. Covering numbers and packing numbers of balls of these reproducing kernel spaces are important measurements of this capacity. We first present lower bound estimates for the packing numbers by means of nodal functions. Then we show that if a Mercer kernel is C^s (for some s > 0 that is not an even integer), the RKHS associated with this kernel can be embedded into C^{s/2}. This gives upper bound estimates for the covering numbers of Sobolev smooth kernels. Examples and applications to the V_γ dimension and Tikhonov (1977) regularization are presented to illustrate the upper and lower bound estimates.
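The packing numbers discussed above can be illustrated numerically. The sketch below (an illustration only, not the paper's method; the function names, the Gaussian kernel choice, and all parameters are assumptions) draws random functions from the unit ball of a Gaussian-kernel RKHS, using the identity ||Σ_i c_i K(x_i, ·)||_K² = cᵀGc with G the Gram matrix, and then computes a greedy lower bound on the ε-packing number of that ball in the sup norm:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) Mercer kernel on the interval [0, 1].
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma**2))

def sample_rkhs_ball(n_funcs, n_centers, grid, sigma=1.0, seed=0):
    """Draw random functions f = sum_i c_i K(x_i, .) rescaled into the
    unit RKHS ball, returned as their values on `grid`."""
    rng = np.random.default_rng(seed)
    funcs = []
    for _ in range(n_funcs):
        centers = rng.uniform(0.0, 1.0, n_centers)
        c = rng.normal(size=n_centers)
        G = gaussian_kernel(centers, centers, sigma)
        norm = np.sqrt(max(c @ G @ c, 1e-12))  # ||f||_K^2 = c^T G c
        c = c / norm                           # project onto the unit ball
        funcs.append(gaussian_kernel(grid, centers, sigma) @ c)
    return np.array(funcs)

def greedy_packing_number(values, eps):
    """Greedy lower bound on the eps-packing number in the sup norm:
    keep a sample only if it is > eps away from every kept one."""
    kept = []
    for v in values:
        if all(np.max(np.abs(v - u)) > eps for u in kept):
            kept.append(v)
    return len(kept)
```

Running `greedy_packing_number` at decreasing radii ε shows the count growing, a crude empirical counterpart of the lower bound estimates; the smoother the kernel (larger sigma), the slower the growth, matching the embedding-based upper bounds.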