Tuning support vector machine (SVM) hyperparameters is an important step in achieving a high-performance learning machine. It is usually done by minimizing an estimate of the generalization error, based either on leave-one-out (LOO) bounds such as the radius-margin bound, or on performance measures such as the generalized approximate cross-validation (GACV), the empirical error, etc. The usual automatic methods for tuning the hyperparameters require an inversion of the Gram matrix or the resolution of an extra quadratic programming problem. For large data sets, these methods add substantial memory consumption and CPU time to the already significant resources used in SVM training. In this paper, we propose a fast method based on an approximation of the gradient of the empirical error, combined with incremental learning, which reduces the resources required both in processing time and in storage space. We tested our method on several benchmarks, obtaining promising results that confirm our approach. Furthermore, it is worth noting that the time savings increase as the data set grows.
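To illustrate the kind of procedure the abstract describes, the following minimal Python sketch (not the authors' implementation; the parameter values, the sigmoid smoothing, and the finite-difference gradient are assumptions for illustration) tunes the RBF width gamma by descending an approximate gradient of a smoothed empirical validation error, using scikit-learn's SVC. The paper's actual method uses an analytic gradient approximation together with incremental learning, which avoids the full retraining done at each step here.

```python
# Sketch: gradient-style tuning of an SVM hyperparameter by minimizing
# a smoothed empirical (validation) error. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def empirical_error(log_gamma, X_tr, y_tr, X_va, y_va):
    """Smoothed validation error: mean sigmoid of the negated margin."""
    clf = SVC(kernel="rbf", C=10.0, gamma=np.exp(log_gamma))
    clf.fit(X_tr, y_tr)
    # decision_function gives signed distances; labels are in {-1, +1},
    # so y * f(x) > 0 means a correct classification.
    margins = np.clip(y_va * clf.decision_function(X_va), -60, 60)
    return np.mean(1.0 / (1.0 + np.exp(margins)))

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
y = 2 * y - 1  # map labels {0, 1} -> {-1, +1}
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

log_gamma, lr, eps = np.log(1.0), 1.0, 1e-2
for step in range(30):
    # Central finite difference of the smoothed error w.r.t. log(gamma);
    # this stands in for the analytic gradient used in the paper.
    grad = (empirical_error(log_gamma + eps, X_tr, y_tr, X_va, y_va)
            - empirical_error(log_gamma - eps, X_tr, y_tr, X_va, y_va)) / (2 * eps)
    log_gamma -= lr * grad

print("selected gamma:", np.exp(log_gamma))
print("final smoothed validation error:",
      empirical_error(log_gamma, X_tr, y_tr, X_va, y_va))
```

Smoothing the 0-1 loss with a sigmoid of the margin makes the objective differentiable in practice, and optimizing log(gamma) rather than gamma keeps the kernel width positive. The finite difference costs two SVM retrainings per step, which is exactly the overhead that incremental learning and the gradient approximation in the paper are designed to reduce.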