We investigate the problem of model selection for learning algorithms that depend on a continuous parameter. We propose a model selection procedure based on a worst-case analysis and on a data-independent choice of the parameter. For the regularized least-squares algorithm, we bound the generalization error of the solution by a quantity depending on a few known constants, and we show that the corresponding model selection procedure reduces to solving a bias-variance problem. Under suitable smoothness conditions on the regression function, we estimate the optimal parameter as a function of the sample size, and we prove that this choice ensures consistency of the algorithm.
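A minimal sketch of the setting described above, for intuition only: regularized least squares in a reproducing kernel Hilbert space, with the regularization parameter chosen a priori as a function of the sample size rather than by data-dependent validation. The Gaussian kernel, the exponent in the rate lambda(n) = n^(-1/2), and the synthetic data are illustrative assumptions, not the paper's prescription; the optimal exponent depends on the smoothness assumed for the regression function.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix; the kernel choice
    # is an assumption for this sketch.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rls_fit(X, y, lam):
    # Regularized least squares in the RKHS:
    #   minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    # The minimizer is f(x) = sum_i c_i K(x, x_i) with
    #   c = (K + n * lam * I)^{-1} y.
    n = len(y)
    K = gaussian_kernel(X, X)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return c

def rls_predict(X_train, c, X_test):
    # Evaluate the kernel expansion at the test points.
    return gaussian_kernel(X_test, X_train) @ c

# Data-independent parameter choice: lam = n^(-1/2) is one
# illustrative rate trading off bias (regularization error)
# against variance (sample error) as n grows.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

lam = n ** (-0.5)
c = rls_fit(X, y, lam)
X_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(rls_predict(X, c, X_test))
```

Because lambda is fixed in advance from n alone, consistency here is a statement about the schedule lambda(n) itself, which is exactly what makes the worst-case, data-independent analysis possible.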