The representer theorem for kernel methods states that the solution of the associated variational problem can be expressed as a linear combination of a finite number of kernel functions. However, for non-smooth loss functions, the analytic characterization of the coefficients poses nontrivial problems. Standard approaches resort to constrained optimization reformulations which, in general, lack a closed-form solution. Herein, by a suitable change of variable, it is shown that, for any convex loss function, the coefficients satisfy a system of algebraic equations in fixed-point form, which can be obtained directly from the primal formulation. This algebraic characterization is specialized to regression and classification methods, and the fixed-point equations are worked out explicitly for many loss functions of practical interest. The consequences of the main result are then investigated along two directions. First, the existence of an unconstrained smooth reformulation of the original non-smooth problem is proven. Second, in the context of SURE (Stein's Unbiased Risk Estimation), a general formula for the degrees of freedom of kernel regression methods is derived.
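To make the fixed-point characterization concrete, the sketch below applies it to regularized kernel regression with a differentiable convex loss. Assuming the primal objective sum_i V(y_i - (Kc)_i) + (lambda/2) c^T K c with an invertible Gram matrix K, the stationarity condition reads c = (1/lambda) V'(y - Kc), which can be solved by a damped Picard iteration. This is only an illustrative minimal example with the smooth Huber loss standing in for a non-smooth one; the helper names (rbf_gram, huber_deriv, fixed_point_fit) and all parameter values are hypothetical and not taken from the paper, which handles general, possibly non-smooth, convex losses through a change of variable.

import numpy as np

def rbf_gram(X, gamma=0.5):
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2) of the Gaussian kernel
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def huber_deriv(r, delta=0.5):
    # Derivative of the Huber loss: identity near zero, clipped to +/- delta outside
    return np.clip(r, -delta, delta)

def fixed_point_fit(K, y, lam=1.0, delta=0.5, n_iter=5000, tol=1e-9):
    # Damped Picard iteration on the stationarity condition of the assumed primal
    #   minimize_c  sum_i V(y_i - (K c)_i) + (lam / 2) * c^T K c,
    # which, for invertible K and differentiable convex V, is equivalent to
    #   c = (1 / lam) * V'(y - K c).
    eta = lam / (lam + np.linalg.eigvalsh(K).max())  # damping factor chosen to keep the iteration stable
    c = np.zeros_like(y)
    for _ in range(n_iter):
        c_new = (1.0 - eta) * c + (eta / lam) * huber_deriv(y - K @ c, delta)
        if np.max(np.abs(c_new - c)) < tol:
            return c_new
        c = c_new
    return c

# Toy usage: noisy sine data; fitted values are f(x_i) = sum_j c_j k(x_i, x_j) = (K c)_i
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
K = rbf_gram(X)
c = fixed_point_fit(K, y)
print("training MSE:", np.mean((K @ c - y) ** 2))

With the damping factor derived from the largest eigenvalue of K, the iteration amounts to preconditioned gradient descent on the assumed primal objective, so it converges for this smooth loss; the point of the paper is that an analogous fixed-point system remains available even when the loss is non-smooth.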