We determine the asymptotic behaviour of the function computed by support vector machines (SVMs) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the regime where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. We provide non-asymptotic convergence bounds to this limit in the L2 sense, together with upper bounds on the classification error, which is shown to converge to the Bayes risk; this proves the Bayes consistency of a variety of methods even though the regularization term does not vanish. These results are particularly relevant to the one-class SVM, whose regularization parameter cannot vanish by construction, and which is shown for the first time to be a consistent density level set estimator.
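A minimal sketch of the regime described above, using scikit-learn's OneClassSVM: the regularization parameter nu is held fixed while the Gaussian bandwidth shrinks as the sample size grows. The bandwidth schedule sigma = n^(-1/6) and the comparison against the density level set enclosing mass 1 - nu are illustrative assumptions for this sketch, not the paper's construction or its rates.

```python
# Illustrative sketch only: one-class SVM with a Gaussian RBF kernel,
# fixed regularization (nu), and a bandwidth shrinking with n.
# The schedule sigma = n**(-1/6) is an assumption, not a rate from the paper.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def density(x):
    """True density: standard 2-D Gaussian."""
    return np.exp(-0.5 * (x ** 2).sum(axis=1)) / (2 * np.pi)

nu = 0.1  # fixed regularization: target fraction of mass left outside
for n in (200, 1000, 5000):
    X = rng.standard_normal((n, 2))   # sample from the true density
    sigma = n ** (-1.0 / 6.0)         # bandwidth -> 0 as n -> infinity
    clf = OneClassSVM(kernel="rbf", gamma=1.0 / (2 * sigma ** 2), nu=nu)
    clf.fit(X)

    # Rough empirical check: compare the estimated region {f >= 0}
    # with the density level set enclosing mass 1 - nu, on fresh points.
    X_test = rng.standard_normal((10000, 2))
    inside = clf.decision_function(X_test) >= 0
    t = np.quantile(density(X_test), nu)      # threshold at nu-quantile
    true_inside = density(X_test) >= t
    err = np.mean(inside != true_inside)      # symmetric-difference error
    print(f"n={n:5d}  sigma={sigma:.3f}  level-set disagreement={err:.3f}")
```

Under these assumptions, the disagreement between the estimated region and the true level set should shrink as n grows, in line with the consistency result stated above.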