Low-cost model selection for SVMs using local features
Engineering Applications of Artificial Intelligence
The selection of hyper-parameters in support vector machines (SVMs) is a key point in the training process of these models when applied to regression problems. Unfortunately, no exact method is known for obtaining the optimal set of SVM hyper-parameters, so search algorithms are usually applied to find the best possible set. In general these searches are implemented as grid searches, which are time consuming, and the computational cost of the SVM training process therefore increases considerably. This paper presents a novel study of the effect of reducing the search ranges of the SVM hyper-parameters, in order to shorten the SVM training time with the minimum possible impact on performance. The paper presents a reduction in the range of parameter C, obtained by considering its relation to the remaining SVM hyper-parameters (γ and ε) through an approximation of the SVM model. In addition, we use some characteristics of the Gaussian kernel function and a previous result in the literature to obtain novel bounds for the γ and ε hyper-parameters. The proposed search-space reductions are evaluated on different regression problems from the UCI and StatLib databases. All experiments, carried out with the popular LIBSVM solver, show that our approach reduces the SVM training time while maintaining performance comparable to that obtained when the complete range of each SVM hyper-parameter is considered.
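To make the idea concrete, the following is a minimal sketch of a grid search over the three ε-SVR hyper-parameters (C, γ, ε) with a reduced search space. It is not the paper's exact reduction scheme: the bounds used here follow the well-known Cherkassky–Ma heuristics (C bounded by the spread of the targets, ε scaled by an assumed noise level), and scikit-learn's `SVR` (a LIBSVM wrapper) stands in for a direct LIBSVM call. The synthetic dataset and the noise estimate `sigma_noise` are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Illustrative regression problem (stand-in for a UCI/StatLib dataset).
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
n = len(y)

# Heuristic upper bound on C from the target spread (Cherkassky & Ma style):
# C <= max(|mean(y) + 3*std(y)|, |mean(y) - 3*std(y)|).
y_mean, y_std = y.mean(), y.std()
C_upper = max(abs(y_mean + 3.0 * y_std), abs(y_mean - 3.0 * y_std))

# Heuristic bound on epsilon from an assumed noise level estimate:
# eps ~ 3 * sigma * sqrt(ln(n) / n). sigma_noise is an assumption here.
sigma_noise = 0.1
eps_bound = 3.0 * sigma_noise * np.sqrt(np.log(n) / n)

# Reduced grid: instead of a wide unconstrained range, C and epsilon are
# capped by the bounds above; gamma keeps a modest log-spaced range.
param_grid = {
    "C": np.logspace(-2, np.log10(C_upper), 5),
    "gamma": np.logspace(-3, 1, 5),
    "epsilon": np.linspace(eps_bound / 3.0, eps_bound, 3),
}

search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
```

Capping the grid this way shrinks the number of candidate models (and thus training time) while the bounds aim to keep the optimum inside the reduced region, which is the trade-off the paper evaluates.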