This paper presents a convex optimization perspective on tuning the regularization trade-off of kernel machines with validation and cross-validation criteria. We focus on Least Squares Support Vector Machines (LS-SVMs) for function approximation and classification. By adopting an additive regularization trade-off scheme, tuning the trade-off with respect to a validation or cross-validation criterion can be written as a convex optimization problem. The solution of this problem contains both the regularization constants that are optimal for the model selection criterion at hand and the corresponding training solution. We refer to such formulations as the fusion of training with model selection. The main tool for accomplishing this is the primal-dual derivations occurring in convex optimization theory. The paper advances the discussion by relating the additive regularization trade-off scheme to the classical Tikhonov scheme and by motivating the usefulness of the former. Furthermore, it illustrates how to restrict the additive trade-off scheme to the solution path corresponding to a Tikhonov scheme while retaining convexity of the overall problem of fusing model selection and training. We relate such a scheme to an ensemble learning problem and to the stability of learning machines. The approach is illustrated on a number of artificial and benchmark datasets, comparing the proposed method with the classical practice of tuning the Tikhonov scheme with a cross-validation measure.
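To make the classical baseline the abstract contrasts against concrete, the sketch below trains an LS-SVM by solving its standard dual KKT linear system for a fixed Tikhonov regularization constant gamma (Suykens & Vandewalle, 1999) and tunes gamma by grid search against a hold-out validation criterion. This is *not* the paper's fused convex program; the helper names (rbf_kernel, lssvm_fit), the kernel bandwidth, the gamma grid, and the toy data are illustrative assumptions. The paper's contribution is to replace this outer grid search by a single convex problem under an additive regularization trade-off scheme.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between row-wise sample sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(K, y, gamma):
    # Solve the LS-SVM dual KKT system for a fixed Tikhonov constant gamma:
    #   [ 0        1^T          ] [ b     ]   [ 0 ]
    #   [ 1   Omega + I / gamma ] [ alpha ] = [ y ]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

# Toy function-approximation data (assumed for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.standard_normal(60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

Ktr = rbf_kernel(Xtr, Xtr)   # train/train kernel block
Kva = rbf_kernel(Xva, Xtr)   # validation/train kernel block

# Classical practice: scan the Tikhonov trade-off gamma against a
# validation criterion (here the mean squared error on a hold-out set).
best = min(
    (np.mean((Kva @ alpha + b - yva) ** 2), g)
    for g in np.logspace(-3, 3, 13)
    for b, alpha in [lssvm_fit(Ktr, ytr, g)]
)
print("validation MSE %.4f at gamma %.3g" % best)
```

Note the design point this sketch exposes: the trained model (b, alpha) is recomputed for every candidate gamma, so training and model selection remain two nested levels. In the paper's additive trade-off scheme, the validation criterion and the training conditions are instead combined into one convex problem, so a single solve yields both the regularization trade-off and the corresponding training solution.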