Many learning algorithms use hypothesis spaces that are built from the sample itself, yet little theoretical work has been devoted to such algorithms. In this paper we show that the mathematical analysis of these algorithms differs essentially from that of algorithms whose hypothesis spaces are independent of the sample or depend only on the sample size. The difficulty lies in the lack of a proper characterization of the approximation error. To overcome this difficulty, we propose to use a larger function class (not necessarily a linear space) that contains the union of all possible hypothesis spaces (which vary with the sample) to measure the approximation ability of the algorithm. We show how this idea yields an error analysis for two particular classes of kernel-based learning algorithms: learning the kernel via regularization and coefficient-based regularization. We demonstrate the power of this approach through its wide applicability.
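To make the setting concrete, the following is a minimal sketch in generic learning-theory notation; the symbols z, m, K, ℓ, λ, Ω, the sample-dependent space H_z, and the enclosing class H* are illustrative choices for this sketch, not notation taken from the paper.

% A coefficient-based regularization scheme whose hypothesis space is built
% from the sample z = {(x_i, y_i)}_{i=1}^m itself:
\[
  \mathcal{H}_z = \Bigl\{ f_\alpha = \sum_{i=1}^{m} \alpha_i K(\cdot, x_i) : \alpha \in \mathbb{R}^m \Bigr\},
  \qquad
  f_z = \operatorname*{arg\,min}_{f_\alpha \in \mathcal{H}_z}
  \frac{1}{m} \sum_{i=1}^{m} \ell\bigl(f_\alpha(x_i), y_i\bigr) + \lambda\, \Omega(\alpha).
\]
% Because H_z changes with every draw of the sample, the classical
% approximation error inf_{f in H} { E(f) - E(f*) } has no fixed space H
% to refer to. The proposal is to fix a larger class H* (not necessarily
% a linear space) containing every possible hypothesis space,
\[
  \bigcup_{z} \mathcal{H}_z \subseteq \mathcal{H}^{*},
\]
% and to measure the approximation ability of the algorithm against H*,
% e.g. via a regularized approximation error of the plausible form
\[
  \mathcal{D}(\lambda) = \inf_{f \in \mathcal{H}^{*}}
  \bigl\{ \mathcal{E}(f) - \mathcal{E}(f^{*}) + \lambda\, \Omega(f) \bigr\},
\]
% (with Omega extended to a penalty functional on H*), so that the total
% error splits into a sample/estimation part, controlled by the capacity
% of the fixed class H*, and this well-defined approximation part.

Under this sketch, taking Ω(α) = ‖α‖₁ gives an ℓ1-coefficient regularization scheme, while letting K range over a family of kernels and optimizing over that family as well corresponds to learning the kernel via regularization; these are illustrative instances of the two algorithm classes named above, not the paper's exact formulations.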