This paper presents a computation of the Vγ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS), both for the ε-insensitive loss function Lε used in Support Vector Machine (SVM) regression and for general Lp loss functions. The Vγ dimension is shown to be finite, which in turn proves uniform convergence in probability for regression machines in RKHS subspaces that use the Lε or general Lp loss functions; a novel proof of this result is given. The paper also computes an upper bound on the Vγ dimension under certain conditions, which leads to an approach for estimating the empirical Vγ dimension from a set of training data.
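For concreteness, the loss functions the abstract refers to can be written out as follows. This is a sketch using the standard definitions from the SVM regression literature, with y denoting the target value and f(x) the regression machine's output; the exact range of p treated in the paper is not stated in the abstract and is assumed here to be p ≥ 1.

$$
L_\varepsilon\bigl(y, f(x)\bigr) = \max\bigl(0,\; |y - f(x)| - \varepsilon\bigr),
\qquad
L_p\bigl(y, f(x)\bigr) = |y - f(x)|^p .
$$

The ε-insensitive loss charges nothing for errors smaller than ε and grows linearly beyond that threshold, while the Lp losses penalize every deviation; finiteness of the Vγ dimension under both families is what yields the uniform convergence result stated above.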